Here's What You Will Find Inside Maya: Secrets of the Pros: Peter Lee of Storydale Inc. rides the power of Maya Unlimited's Sub-Division Surfaces to build a better horse. And now you can too. Page 35
Eric Kunzendorf of Atlanta College of Art challenges you to fire up your senses and visualize your character's actions as you model and texture for efficient animation. Page 3
Mark Jennings Smith of Digital Drama brings you "Organix," in which simple geometric shapes and Paint Effects brush strokes are transformed into complex natural-looking objects that are then brought to life. Page 51
Robin Akin and Remington Scott of Weta Digital combine motion-capture data with keyframed animation in three powerful tutorials to help you master the integration of these two types of animation. Page 85
Keep your characters from mouthing off as Dariush Derakhshani of Sight Effects and John Kundert-Gibbs and Rebecca Johnson of Clemson University show some economical and flexible methods for matching your model's lip motion to your audio track. Page 119
Continued on the inside back cover
Associate Publisher: Dan Brodnitz
Acquisitions and Developmental Editor: Mariann Barsolo
Editor: Pat Coleman
Production Editor: Elizabeth Campbell
Technical Editors: Remington Scott, Mark Bamforth, Eric Kunzendorf, Keith Reicher, Joshua E. Tomlinson
Production Manager: Amy Changar
Cover, Text Design, and Composition: Mark Ong, Side By Side Studios
Technical Illustrations: Eric Houts, epic
Proofreaders: Laura Ryan, Nancy Riddiough, Dave Nash, Laurie O'Connell
Indexer: Lynnzee Elze
CD Coordinator: Dan Mummert
CD Technician: Kevin Ly

Copyright © 2002 SYBEX Inc., 1151 Marina Village Parkway, Alameda, CA 94501. World rights reserved. No part of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher.

Library of Congress Card Number: 2002103171
ISBN: 0-7821-4055-6

SYBEX and the SYBEX logo are either registered trademarks or trademarks of SYBEX Inc. in the United States and/or other countries. Secrets of the Pros is a trademark of SYBEX Inc.

Screen reproductions produced with FullShot 99. FullShot 99 © 1991-1999 Inbit Incorporated. All rights reserved. FullShot is a trademark of Inbit Incorporated.

Star Wars: Episode 1—The Phantom Menace still courtesy of Lucasfilm Ltd. Copyright 1999 Lucasfilm Ltd. & TM. All rights reserved. Used under authorization. Unauthorized duplication is a violation of applicable law.

Stills from The Perfect Storm used with permission. Copyright 2002 Warner Bros., a division of Time Warner Entertainment Company, L.P. All rights reserved.

Cover images: Pinos Mellaceonus © Mark Jennings Smith; Cathedral/Dinosaur/Room images © 2002 Clemson University Digital Production Arts; Horse image © Storydale Inc.; CG Boat image © ILM; Crowd image © Emanuele D'Arrigo

TRADEMARKS: SYBEX has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer.

The author and publisher have made their best efforts to prepare this book, and the content is based upon final release software whenever possible. Portions of the manuscript may be based upon pre-release versions supplied by software manufacturer(s). The author and the publisher make no representation or warranties of any kind with regard to the completeness or accuracy of the contents herein and accept no liability of any kind including but not limited to performance, merchantability, fitness for any particular purpose, or any losses or damages of any kind caused or alleged to be caused directly or indirectly from this book.
The Software compilation is the property of SYBEX unless otherwise indicated and is protected by copyright to SYBEX or other copyright owner(s) as indicated in the media files (the "Owner(s)"). You are hereby granted a single-user license to use the Software for your personal, noncommercial use only. You may not reproduce, sell, distribute, publish, circulate, or commercially exploit the Software, or any portion thereof, without the written consent of SYBEX and the specific copyright owner(s) of any component software included on this media. In the event that the Software or components include specific license requirements or end-user agreements, statements of condition, disclaimers, limitations or warranties ("End-User License"), those End-User Licenses supersede the terms and conditions herein as to that particular Software component. Your purchase, acceptance, or use of the Software will constitute your acceptance of such End-User Licenses. By purchase, use or acceptance of the Software you further agree to comply with all export laws and regulations of the United States as such laws and regulations may exist from time to time.

Reusable Code in This Book
The author(s) created reusable code in this publication expressly for reuse by readers. Sybex grants readers limited permission to reuse the code found in this publication or its accompanying CD-ROM so long as the author(s) are attributed in any application containing the reusable code and the code itself is never distributed, posted online by electronic transmission, sold, or commercially exploited as a stand-alone product.

Software Support
Components of the supplemental Software and any offers associated with them may be supported by the specific Owner(s) of that material, but they are not supported by SYBEX. Information regarding any available support may be obtained from the Owner(s) using the information provided in the appropriate readme files or listed elsewhere on the media. Should the manufacturer(s) or other Owner(s) cease to offer support or decline to honor any offer, SYBEX bears no responsibility. This notice concerning support for the Software is provided for your information only. SYBEX is not the agent or principal of the Owner(s), and SYBEX is in no way responsible for providing any support for the Software, nor is it liable or responsible for any support provided, or not provided, by the Owner(s).

Warranty
SYBEX warrants the enclosed media to be free of physical defects for a period of ninety (90) days after purchase. The Software is not available from SYBEX in any other form or media than that enclosed herein or posted to www.sybex.com. If you discover a defect in the media during this warranty period, you may obtain a replacement of identical format at no charge by sending the defective media, postage prepaid, with proof of purchase to: SYBEX Inc.
Product Support Department
1151 Marina Village Parkway
Alameda, CA 94501
Web: http://www.sybex.com

After the 90-day period, you can obtain replacement media of identical format by sending us the defective disk, proof of purchase, and a check or money order for $10, payable to SYBEX.

Disclaimer
SYBEX makes no warranty or representation, either expressed or implied, with respect to the Software or its contents, quality, performance, merchantability, or fitness for a particular purpose. In no event will SYBEX, its distributors, or dealers be liable to you or any other party for direct, indirect, special, incidental, consequential, or other damages arising out of the use of or inability to use the Software or its contents even if advised of the possibility of such damage. In the event that the Software includes an online update feature, SYBEX further disclaims any obligation to provide this feature for any specific duration other than the initial posting. The exclusion of implied warranties is not permitted by some states. Therefore, the above exclusion may not apply to you. This warranty provides you with specific legal rights; there may be other rights that you may have that vary from state to state. The pricing of the book with the Software by SYBEX reflects the allocation of risk and limitations on liability contained in this agreement of Terms and Conditions.
Software License Agreement: Terms and Conditions
The media and/or any online materials accompanying this book that are available now or in the future contain programs and/or text files (the "Software") to be used in connection with the book. SYBEX hereby grants to you a license to use the Software, subject to the terms that follow. Your purchase, acceptance, or use of the Software will constitute your acceptance of such terms.

Shareware Distribution
This Software may contain various programs that are distributed as shareware. Copyright laws apply to both shareware and ordinary commercial software, and the copyright Owner(s) retains all rights. If you try a shareware program and continue using it, you are expected to register it. Individual programs differ on details of trial periods, registration, and payment. Please observe the requirements stated in appropriate files.
Copy Protection
The Software in whole or in part may or may not be copy-protected or encrypted. However, in all cases, reselling or redistributing these files without authorization is expressly forbidden except as specifically provided for by the Owner(s) therein.
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
To all our family, friends, and colleagues. And to Kristin, Joshua, and Kenlee. —John Kundert-Gibbs
Acknowledgments

Florian Fernandez
Charles LeGuen
Miles Perkins, ILM
Jacklyn Pomales, Warner Brothers
Chris Holm, Lucasfilm Ltd.

At Clemson University: James Barker, Doris Helms, Bonnie Holaday, Mark McKnew, Robert Geist, and Mike Westall

At Sybex: Acquisitions and Developmental Editor Mariann Barsolo, Production Editor Elizabeth Campbell, Editor Pat Coleman, Production Manager Amy Changar, and Designer Mark Ong

Chapter 4, Robin Akin and Remington Scott: Florian Fernandez, for providing the animation model, setup, and accompanying MEL script for the tutorials on the CD that accompanies the book. You can visit his web page at www.flo3d.com. Spectrum Studios, for providing the motion capture data used in the tutorial.

Chapter 6, Emanuele D'Arrigo: John Kundert-Gibbs, our coordinator, for his friendliness and constant encouragement. Robin Akin, for involving me in this project with so many incredibly experienced professionals. Munich Animation and its 3D supervisor Hartmut Engel, for allowing me to use their technical resources. Chris Faber, one of the most amazingly talented and complete 3D artists, for his extremely precious 1GB of RAM. Thomas Hain, thanks again for your hints. Simone Bruschi, who gave me the first suggestions on how to do CGI professionally. Franco Valenziano, who left me in front of Softimage on an SGI, with three tutorials to do in one hour. Carlo Alfano, to whom I dedicate my chapter. I am grateful for the chance that he gave me many years ago, offering an internship with the crew of Locomotion and kick-starting my happy career.

Chapter 8, Habib Zargarpour: The incredible ILM crew of 120 artists who worked on The Perfect Storm, for their dedication and perseverance in what is likely to be the most difficult show they have worked on. VFX supervisor Stefen Fangmeier kept a consistent vision throughout the project, and ILM producer Ginger Thieson put up with my endless requests for reference! Many people contributed to this show, and in particular I would like to acknowledge John Anderson for his experience and knowledge of fluid dynamics and physics; we benefited greatly from his wisdom. My R&D team of technical directors: Joakim Arnesson, Pat Conran, Niel Herzinger, Chris Horvath, Erik Krumrey, Masi Oka, and Mayur Patel. They all made amazing contributions and made it seem like anything was possible.
About the Authors
John Kundert-Gibbs, lead author
John Kundert-Gibbs is director of the Digital Production Arts program at Clemson University, which is preparing the technical directors of the future. Author of a number of publications on Maya, computer graphics, and dramatic literature, he directs students in producing animated shorts, creates effects for live action projects, and designs electronic media for theatrical productions. He is co-author of Mastering Maya 3 and Mastering Maya Complete 2 and has a Bachelor of Arts degree in physics from Princeton University and a Ph.D. in dramatic literature from Ohio State University.
Robin Akin
Robin Akin has been an animation supervisor for a CG/live-action television docu-drama and an animator and digital artist on feature films, commercials, and television series. Currently, Robin is working at Weta Digital on The Lord of the Rings: The Two Towers, the second film of the Tolkien trilogy. She has also worked for Digital Domain and Square USA, among other studios. Her animation credits include the feature films Titanic and Final Fantasy: The Spirits Within and Clio Award-winning commercials for clients such as Jeep and Coca-Cola.
Emanuele D'Arrigo
Emanuele D'Arrigo began working in computer graphics in 1995 in Italy before relocating to Munich to work as a freelance artist on various projects, such as the full-length animated feature Help! I'm a Fish!, a Danish-German coproduction, and the pilot episode for the science-fiction TV series Ice Planet, produced by H5B5. Professionally born on Softimage, he's now a TD mainly working with Maya, specialized in scripting, crowds, and effects. At the time of this writing, he's still based in Munich but has plans to move to England and eventually to San Francisco. His motto is: "Believe in your team, believe in your dreams."
Timothy A. Davis
Timothy A. Davis is currently an Assistant Professor in the Computer Science Department at Clemson University and has played an active role in developing the interdisciplinary master's program in Digital Arts Production, which trains students to produce special effects for entertainment and commercial projects. His teaching and research interests include parallel/distributed rendering, ray tracing, spatio-temporal coherence, and non-photorealistic rendering. He received his Ph.D. from North Carolina State University in 1998 and has worked in technical positions for the Environmental Protection Agency and NASA Goddard Space Flight Center.
Dariush Derakhshani
Dariush Derakhshani is a senior CGI effects animator with Sight Effects in Venice, CA, working on award-winning national television commercials. He has won the Bronze Plaque from the Columbus Film Festival and has shared honors from the AICP and London International Advertising Awards. He has worked as a CGI animator and compositor on a variety of projects from films to television and was a Supervising Technical Director for the South Park television series. Dariush also enjoys splitting his time consulting and teaching at a variety of schools, including USC Film School's MFA Animation program, and writing as frequently as he can. His works have appeared on thescratchpost.com and various sites of the digitalmedianet.com, and he is the Senior Editor of taintmagazine.com. He is the author of an upcoming book on Maya. Dariush has a Bachelor of Arts in architecture and in theater from Lehigh University and a Master of Fine Arts in animation from USC Film School. He is bald and has flat feet.
Eric Kunzendorf
Eric Kunzendorf is co-chairman of Electronic Arts in the areas of animation, multimedia, and digital art at The Atlanta College of Art and has been teaching computer graphics and animation at the college level for the past nine years. His animations, Final Project Assignment and Mime In A Box, have made appearances at the SIGGRAPH Computer Animation Festival
in 1999 and 2000, respectively. He holds a Bachelor of Arts in art history from Columbia University and a Masters of Fine Arts in drawing and painting from the University of Georgia and has exhibited computer-generated prints nationally.
Peter Lee
Peter Lee is director at Storydale Inc. in South Korea, where he supervises TV series, commercials, movies, and game companies. He has worked as a 3D artist on projects such as Columbia Tristar's The Nuttiest Nutcracker, New Line Cinema's Jason X, and Jon Nappa's The Super Snoopers; co-authored Sybex's Mastering Maya series; and taught computer animation at ITDC, University of Toronto. He has also led projects such as game movies for Dong Seo Game Channel's PC game The Three Kingdoms III and Joycast's PS2 game Wingz.
Frank Ritlop
Frank Ritlop is a Canadian living in New Zealand. He has been working in the CG industry for more than nine years and has a degree in film animation from Concordia University in Montreal. He has worked as a lighting supervisor for CG studios in Montreal and Munich and as a lighting artist at Square USA on Final Fantasy: The Spirits Within. He is currently working at Weta Digital as a Shot Technical Director on the second installment of the Lord of the Rings trilogy.
Remington Scott
Remington Scott is currently overseeing motion capture production at Weta Digital for The Lord of the Rings: The Two Towers and was the Director of Motion Capture at Square Pictures for Final Fantasy: The Spirits Within and The Matrix: Flight of the Osiris. He has professionally created digital animations for 16 years. During this time, he was the Interactive Director at Acclaim Entertainment for the multi-platinum selling Turok: Dinosaur Hunter, and he also co-created the first digitized home video game in 1986, WWF: Superstars of Wrestling.
Mark Jennings Smith
Mark Jennings Smith has been interested in CG since 1972, when at age 10 a chance encounter with the first coin-op PONG changed his life. A CG historian, Mark would love nothing more than discussing industry topics from CG philosophy, industry politics, and pixels as fine art to character animation, flocking simulation, and the rise of the synthetic actor. He enjoys collecting fossils of CG past, on film and video. He cites William Latham, Yoichiro Kawaguchi, Karl Sims, and David Em as influences, among others. His art and words can be found from cover to page in several CG/FX books and magazines. His film work has run the gamut from Universal Pictures to Roger Corman himself. His latest projects deal with "EXstreme Organix" (www.absynthesis.com), a cult comic-book hero, and a study of the past, present, and future of virtual humans.
Susanne Werth
Susanne Werth got into animation in Mexico in 1997. She graduated from Fachhochschule Furtwangen in computer science and media and started out as a character animator with a children's TV series in Potsdam, Germany. After a period of time working as layout artist and layout supervisor, she went back to animation and stepped forward into MEL scripting. She currently works as character animator and MEL programmer at a children's TV production company in Germany.
Habib Zargarpour
Habib Zargarpour is Associate Visual Effects Supervisor at Industrial Light & Magic. He is a recipient of the British Academy Award for Best Achievement in Visual Effects, as well as being nominated for an Academy Award for Best Achievement in Visual Effects for his work on both The Perfect Storm and Twister. His other credits include the upcoming Signs, Star Wars: Episode I The Phantom Menace, Spawn, Star Trek: First Contact, Jumanji, Star Trek: Generations, and The Mask. Habib joined ILM in the early '90s, after working as a graphic artist and fine arts illustrator since 1981. He received his B.A.Sc. in mechanical engineering from the University of British
Columbia in Vancouver and went on to graduate with distinction in industrial design from the Art Center College of Design in Pasadena in 1992.
Technical Editors

Remington Scott
See above.

Mark Bamforth
Mark Bamforth works in the Multimedia Department at PricewaterhouseCoopers. He has been working with 3D applications for 9 years and programming for 19. His animated short Space Station Fly-Through was screened at Siggraph 2000. He is currently working toward various continued education degrees in Maya and special effects at New York University's Center for Advanced Digital Applications.

Eric Kunzendorf
See above.

Keith Reicher
Keith Reicher is a multimedia artist/3D animator currently freelancing in New York. He graduated from Pratt Institute with a Master of Fine Arts in computer graphics. Keith is the sole creator of the 3D animated short Benjamin Task, on which he functioned as writer, modeler, animator, and music composer. His interest in visual effects and animation began with the first Star Wars movie. He is now working toward a career in the film industry.

Joshua E. Tomlinson
Joshua E. Tomlinson is a techie by trade but an artist at heart. He has a love for both the technical and aesthetic aspects of CGI. He received his Bachelor of Science in computer science from Wofford College in 2000 and is now finishing his Master of Fine Arts in digital production arts at Clemson University.

Student Contributors

Robert Helms
Robert Helms is a graduate student pursuing a Master of Fine Arts in digital production arts at Clemson University. He received his Bachelor of Science in electrical engineering from Clemson University. He particularly enjoys the programming and modeling aspects of the field. In his spare time Robert enjoys reading, painting miniatures, and SCUBA diving when he has enough money.

Rebecca Johnson
Rebecca Johnson, from Sumter, South Carolina, is completing work on a Master of Fine Arts in digital production arts at Clemson University. She was trained as a visual artist at Lander University, and her interest in commercial design evoked enthusiasm for computer-generated art. She enjoys combining her interests in art and computer design through character development and animation. While studying at Clemson, she has participated in many animated productions, undertaking roles as art director, modeler, and animator. Currently she is researching impressionistic rendering techniques. Rebecca hopes to continue to work in the field designing and modeling characters.

Jacob Richards
Jacob Richards started programming when he was 11 years old and worked on everything from the C-64 to the Apple IIe and up to the SGI Onyx machines. He has been working in 3D for more than five years. He received his Bachelor of Science in computer science from Clemson University in 1999 and is now working toward his Master of Fine Arts in digital production arts at Clemson. Jake is presently interning at Pixar Animation Studios.
Contents

Part One   Beginnings: Modeling   1

one   Accelerating the Preproduction Process   Eric Kunzendorf   3
   Getting from Concept to Animation 3
   Scriptwriting, Thumbnailing, and Drawing 4
   Modeling Methods 8
   Texturing a Flour Sack 16
   Setting Up for Animation 19
   Creating an Arm 22
   Blobbyman Comes to Life 31
   Summary 33

two   Modeling a SubDivision Surface Horse   Peter Lee   35
   Creating Basic Shapes with NURBS 35
   Conversion to Polygons 37
   Modeling Details 39
   Summary 49

three   Organix—Modeling by Procedural Means   Mark Jennings Smith   51
   Stop and Smell the Roses: A Primer 52
   A Visual Maya Zoo 54
   Bring Simple Life to a Creation 56
   Abusing the Cone 59
   Alternate Uses of Paint Effects 65
   Organix and Paint Effects 68
   Summary 81

Part Two   Putting Things in Motion: Animation   83

four   Animation and Motion Capture: Working Together   Robin Akin and Remington Scott   85
   What Is Motion Capture? 85
   Animation and Mocap 86
   When Should You Use Mocap? 87
   Using Motion Capture as Reference or Straight out of the Box 88
   Cleaning Mocap 91
   Summary 117

five   Lip-Synching Real-World Projects   John Kundert-Gibbs, Dariush Derakhshani, and Rebecca Johnson   119
   Creating a Lip-Synched Animation 120
   Hands-on Example 1: Lip-Synching Using Blend Shapes and the Trax Editor 128
   Hands-on Example 2: Photo-Real Facial Replacement and Lip-Synching 149
   Summary 163

Part Three   The Numbers Game: Dealing with Complex Systems   165

six   Creating Crowd Scenes from a Small Number of Base Models   Emanuele D'Arrigo   167
   Not Only Big Studios 168
   Heroes 169
   Deploying Three Task Forces 170
   Always Room for Improvement 192
   Summary 193

seven   Taming Chaos: Controlling Dynamics for Use in Plot-Driven Sequences   John Kundert-Gibbs and Robert Helms   195
   When To Use Rigid Body Simulations 196
   The Setup: Plan, Plan, Plan 196
   Preproduction: Research and Development 197
   Production: Getting the "Shot" 198
   Integration and Postproduction: Cheating the Shot So It Works 199
   Working Example 1: A Quasi-Deterministic Pool Break 201
   Working Example 2: Shattering Glass Over a Live-Action Plate 212
   Summary 221

eight   Complex Particle Systems   Habib Zargarpour, Industrial Light and Magic   223
   Preproduction: Research and Development 223
   Choosing the Best Method: Which Type of Simulation to Use 231
   Visualization Techniques: Tips and Tricks 241
   Putting the Method to the Test: Pushing the Limits 246
   Advanced Techniques 247
   Creating the Button 257
   Testing and Documenting Before Postproduction 258
   Final Result 259
   Summary 260

nine   Creating a Graphic User Interface Animation Setup for Animators   Susanne Werth   263
   Planning the GUI 264
   Creating the Window 268
   Procedures 278
   Warnings and Other User Helpers 288
   Summary 289

Part Four   Endings: Surfacing and Rendering   291

ten   Effective Natural Lighting   Frank Ritlop   293
   Lighting Design 293
   Lighting Passes 294
   The Properties of Light 294
   Mapping a Texture to a Light's Color Attribute 295
   Light Types 297
   Shadows 304
   Light Rigs 319
   Finishing Touches 322
   Summary 323

eleven   Distributed Rendering   Timothy A. Davis and Jacob Richards   325
   The Maya Dispatcher 326
   Tools and Recommendations 332
   The Socks Program 335
   Summary 349
Introduction
When, in June 2001, Mariann Barsolo (Acquisitions and Developmental Editor at Sybex) and I first discussed putting Maya: Secrets of the Pros together, we had little inkling what a massive, international endeavor this book would be—nor how successfully it would turn out. Through good contacts and good fortune, Maya: Secrets of the Pros is truly a compendium of the global state of the art of Maya—and indeed 3D graphics as a whole. In these pages are authors from the Far East, Europe, Canada, and the east and west coasts of the United States. Our writers include those working for large effects houses, those working in (or owning) small CG production studios, and those teaching in universities. Our group includes a Maya professional from Italy working in Germany, two from the United States writing in Europe, and three who traveled from Europe or the United States to Australia or New Zealand during the course of writing their chapters. In short, we bring together in these pages a cross-section of the Maya community at large. What does this smorgasbord of globe-trotting Maya professionals from all walks of life mean for you, the interested reader? It means we will serve insights and instruction from the best of the best for you. It means—both on the pages that follow and on the included CD-ROM—you will have access to some of the best techniques, tips, and source materials ever collected in a book intended for public dissemination. It means you will gain understanding of the business, science, and art of 3D graphics. And it means you will gain a global perspective about what makes us all passionate about the world of 3D graphics, however we practice it individually.
About this Book

One thing worth noting is that Maya: Secrets of the Pros is not for the neophyte: If you don't know the difference between a manipulator handle and a spline curve, you're probably better off getting another book first (like, say, Mastering Maya 3 from Sybex). If, however, you're an advanced hobbyist or especially a professional who makes your living doing 3D graphics, this book is definitely for you—in fact, we built it from the start to be for you! In fact (and this is worth pointing out in bold face), even if you use other 3D
packages along with or instead of Maya, this book is still for you. Although the scene files are all Maya, the workflow strategies and insights apply to just about any professional 3D package. We have attempted, whenever possible, to keep this book version-agnostic, if not package-agnostic. In other words, we feel the knowledge contained in this book is too timeless to go out of style whenever the next version of Maya comes out, so we concentrated on presenting workflow, technique, and creative problem-solving practiced by professionals rather than just the latest bells and whistles Maya provides (though there's plenty of "wow" factor to each of the chapters). So, whether you're working in Maya 4 or Maya 21, the knowledge in this book will challenge and inspire you to create better, more efficient, and more beautiful work. To get a better feeling for the buffet of tasty 3D dishes in store for you, try flipping through the book's pages (as if you didn't do that already). Just in case you want a little more of an appetizer for each chapter, here's what's in store for you, chapter by chapter.

Part One: Beginnings: Modeling

Chapter 1 starts at the beginning, moving from conceptual work to modeling efficient characters for your animation work. Eric Kunzendorf walks you through the steps of visualizing your character and its eventual actions and then modeling and texturing the character so that your animation work will be a breeze rather than a chore. In Chapter 2, Peter Lee presents a concise, powerful method for creating complex organic forms (in this case a realistic horse) using Maya Unlimited's amazing modeling tool, SubDivision Surfaces. This chapter not only shows you how to use one of today's cutting-edge tools, but shows how professionals create complex shapes quickly and efficiently. Chapter 3 takes us to a whole new plane of modeling and animation. Here Mark Jennings Smith shows us how, through repetition of simple geometric shapes and Paint Effects brush strokes, to create eerily natural shapes he terms "organix." And if modeling these objects is beautiful, animating them is something close to meditation at a computer screen!
Part Two: Putting Things in Motion: Animation

In Chapter 4, Robin Akin and Remington Scott cover the fascinating art and science of combining motion-captured data from live actors with keyframed animation dreamed up by the animator—you—sitting in a studio. Using high-quality motion capture data and a powerful animation setup, the authors walk you (literally!) through the process of combining these two types of animation. Chapter 5 covers the exacting art of lip-synching mouths with pre-recorded audio tracks. Using examples from both cartoon and photo-real cases, Dariush Derakhshani, John Kundert-Gibbs, and Rebecca Johnson present the most efficient and flexible methods for matching your model's lip motion to your audio track, and they even provide tips on getting the best sound recording you can for your work.

Part Three: The Numbers Game: Dealing with Complex Systems

In Chapter 6, Emanuele D'Arrigo shows you how to create an entire crowd of "Zoids" from a single base model and then how to get them all to do "the wave" on cue! In this chapter, you will learn how to vary the size, shape, and color of your base model and how to introduce random variety to your characters' actions, producing believable crowd motion in no time flat. In Chapter 7, John Kundert-Gibbs and Robert Helms tackle the problem of taming Maya's built-in dynamics engine to produce specific, controllable effects for specific animation needs. Whether you need tight control over a stylized reality or to get a complex effect to look as real as "the real thing," this chapter will provide insight into getting the job done on budget and on time. Chapter 8 is all about particles, particles, and more particles! In this chapter, Habib Zargarpour takes you on a backstage tour of the process used to create the massively powerful and realistic wave mist from The Perfect Storm. Here, from one of the true masters of the insanely complex, are the secrets to working with huge datasets of particles to create photo-real production work. If you've ever wanted to create a simple graphical control for your animated characters, Chapter 9 is for you. First Susanne Werth helps you refine your goals in creating a GUI animation control; then she
walks you through the steps of creating one yourself. When you finish, you'll have a working template into which you can fit just about any character to speed up your (or someone else's) animation task.

Part Four: Endings: Surfacing and Rendering

In Chapter 10, Frank Ritlop shows you how the pros do lighting. Not satisfied with a few lights shining on his example scene, he meticulously demonstrates the layers of lights that need to be built up to form the beautiful finished product. Once you've worked through this chapter, you'll have the understanding and confidence to create your own complex and beautifully lit scenes. Chapter 11 is all about getting the most out of your rendering machines, whether they are dedicated render farm boxes or are being shared by users and render tasks. Timothy A. Davis and Jacob Richards explain the general theory of running large render jobs in parallel and then take you on a tour of their own distributed renderer—included on the CD—which you can use to speed up your rendering tasks.
About the CD

The CD-ROM that accompanies this book is packed with useful material that will help you master Maya for yourself. In addition to a working demo of xFrog (a 3D modeling package that creates some very cool organic models), we have scene files, animations, and even source code relating to the chapters in the book. Some CD highlights are:

• Models, textures, and animation of a flour sack
• A completed subdivision surface horse, plus animation of it walking
• "Organix" scene files and animations (see Chapter 3)
• Motion-capture data and a pre-rigged skeleton to place it on
• Models and animation demonstrating various lip-synching techniques
• Models and MEL code for generating crowd scenes, plus animation of this crowd doing "the wave"
• Models demonstrating rigid body solutions to a billiards break and shattering glass, plus animation of various stages of work on these scenes
• A simplified version of the "crest mist" generating scene used in The Perfect Storm
• MEL code for generating character animation GUI controls
• Scenes with models and textures for use in practicing lighting technique
• A completely functional Windows-based distributed rendering solution, plus source code for it

As you can see from this list, rather than having to create entire scenes from scratch for each chapter, the accompanying scenes and animations get you started and help guide you to appropriate solutions quickly and efficiently. Additionally, after you go through a chapter once, you can grab a sample scene file or bit of code and play with it, finding your own unique and wonderful solutions to the challenges presented in each chapter.
Staying Connected

To stay up-to-date on Maya: Secrets of the Pros, please go to the book's page at www.sybex.com. If you have any questions or feedback, please contact John Kundert-Gibbs at jkundert@cs.clemson.edu.
Sharing Our Experiences

As you can see, the subjects covered in this book unveil nearly all Maya has to offer and likewise most areas of 3D graphics. Often these authors will reveal little-known secrets or point out ways to perform a task that make that job much quicker than you might have imagined. Whether you're an animator, a surfacing artist, a
Technical Director, or someone who loves all the phases of 3D production, there is a delicious dish somewhere in this book for you. And whether you proceed from first to last course or pick and choose your meals, there will be something in Maya: Secrets of the Pros to satisfy your appetite for months if not years to come. What has become clear to all of us who worked on this book is that, no matter how long you have worked with Maya or in 3D graphics, there is always more to learn, and that the joy of learning is half the fun of working in this profession. We also have been inspired and amazed by one another's work. Finally, we have distilled in our little enclave one of the greatest aspects of working in the graphics community: there is an openness, generosity, and willingness to share in this community that is simply astounding. As you sample the chapters in this book, recall that someone is serving you years of hard-won experience with every page you read and every exercise you work through. All you have to do is pull up a chair and dig in. Working on these pages has been a reinvigorating experience. All of us have felt again the joy of discovery and the equally wonderful joy of helping others who appreciate our unique talents. We trust you will feel our heartfelt love of what we do, and in sharing it with you, on any given page. And don't forget to share your passion with others! We have had great pleasure in preparing this eleven-course meal, and invite you to partake of the feast we have set out for you! —John Kundert-Gibbs April 28, 2002
Accelerating the Preproduction Process
Eric Kunzendorf
I'll discuss some strategies to accelerate preproduction as well as the early stages of production. In most individual animation projects, these early phases take the lion's share of time. Often animators, especially those learning the art form, spend so much time scriptwriting, thumbnailing, drawing, modeling, texturing, and rigging that they run out of time and inspiration for what was originally the focus of their efforts: the animation itself.

Getting from Concept to Animation

You create and animate characters in the following interrelated and often interwoven stages:

Scriptwriting: deciding what your character will do.
Thumbnailing: deciding how your character will look.
Drawing: deciding how the details of your character will look.
Modeling: creating geometry for characters, props, and backgrounds.
Texturing: creating materials for your character and scenes.
Setup: wrapping geometry around bones and other deformers (sometimes called enveloping).
Layout: arranging characters in your scene.
Animating: making your characters move. (Layout is often combined with this phase.)
Lighting: making your animations visible for the camera.
Rendering: taking moving pictures or "filming" the animation you've created.
Post-process: sequencing rendered frames with sound, compositing, editing, and output to tape.

As you can see, the process is long, and almost everyone who creates 3D animation follows it to some extent. Game, motion graphic, commercial advertising, animation broadcast
video, and feature film studios all use this methodology to varying degrees. But these are examples of team-based production; individual animation producers/directors, such as those trying to produce an animation for use on their demo reels, are responsible for each phase themselves. Spending too much time on any one phase or group of phases can be fatal to the entire process. As an animator, I tend to focus on getting to the animation phase as quickly as possible, while still having an appealing and easy-to-animate character. My goal in this chapter is to show you how to speed up these early phases of the CG (computer graphics) animation process.

Becoming skilled at computer animation involves negotiating the dichotomy of learning the technical processes of the discipline while attempting to master the art forms of quality character animation. Complicating this process is the complexity of character animation itself, the enormity of Maya (or other high-end 3D packages) as a piece of software, and the time constraints inherent in any animation project. Nobody has an unlimited amount of time for work (and making peace with that fact is a superb first lesson), so operating within a given amount of time is an inevitable constraint. Coming to grips with the limited resource of time and comparing it with the enormous volume of work involved makes you realize how daunting the challenge of creating quality CG animation really is—especially in a solo or small group project!

"It's [baseball] supposed to be hard. If it wasn't hard, everyone would do it. The 'hard' is what makes it great." —Jimmy Dugan (Tom Hanks) in A League of Their Own

Computer animation is like baseball, but with the added difficulty of being perceived as easy to accomplish. Unfortunately, the rah-rah making-of documentaries, feature-laden advertisements, and skilled demonstrations by power-using industry professionals do nothing to dispel this idea. These facts, coupled with the intrinsic coolness of creating animated creatures and worlds, have led thousands of individuals to try their hand at animation. Lured by obsolete tales of Hollywood discovering, hiring, and training people off the street and egged on by the desire to see their creations come to life, people often work diligently to learn the software but then become puzzled and, in some cases, discouraged when their skills are not marketable at their desired employment level. This is especially true of character animation. Character animation is hard, and that "hard" truly does make it great! If it were easy, everybody would do it well.

Animating well requires that you focus on what you want to accomplish as precisely as possible before sitting down at the computer. The first three phases of the animation process—scriptwriting, thumbnailing, and drawing—are often called preproduction. They are key because you have precious little time to complete an animation, and the planning, far from increasing production time, actually shortens it.
Scriptwriting, Thumbnailing, and Drawing

Deciding what you want your character to accomplish should be an obvious first step, but I am continually amazed at the huge effort young artists put into creating characters with absolutely no idea what they want them to do. Then, when they try to animate them, they become frustrated when the characters will not do what they want. The first rule of animation (and all types of film making, for that matter) is: put the story first. All great movies and shorts have stories that every aspect of the production process must serve. Pixar, PDI, and
Disney all followed this idea in every critically and financially successful animated movie they produced. They come up with interesting stories and then expend enormous amounts of R&D, artistic effort, and technical resources to produce their movies. Game developers operate on much the same principle.

Scriptwriting is an art form unto itself. The intricacies of the three-act narrative and beyond are way beyond the scope of this short chapter. So, recognizing that a larger story will provide a narrative framework in which the discrete actions of the character carry that story, I'll confine my discussion of "script" to individual actions within that story. This process works well when you are working to create a demo reel, which can be made up of individual animation exercises as well as larger, more complex narratives.

Although having no plan is bad enough, having a plan that is too general is arguably worse. One of my most difficult tasks in guiding animation students is paring down their ideas to manageable levels. Their troubles begin when they present me with ideas such as the following:

• "I want my character to fight."
• "I want my character to show emotions."
• "I want to do a single skin character." Why? "Because it's a test of my modeling skills."
• And my all-time favorite: "I want to create a realistic character."

And these are only a few. These goals, while noble, are much too general. Hidden in these simple statements are vast amounts of work that can rapidly overwhelm an individual. For example, the "I want my character to fight" story idea lacks some important descriptive work. Unless the character is going to shadow box, the animator will need to model, texture, and rig another character. So the animator has to do twice the work. Next is the question of what type of fighting the characters will use. Will it be American-style boxing, karate-do, wushu, or drunken monkey style kung fu? Will it be kicks, punches, throws, or grappling? Every animator has some desire to create Matrix-style, "wall-walking" fight scenes, but many fail to realize that each move within those fight scenes is extremely well thought out and choreographed. In animation, you may need to build and rig the character in several specific ways to accommodate those actions.

The cure for such over-generalization is a detailed description of every action. Therefore, an acceptable beginning for each of these ideas in the earlier list would go something like this:

• "My character will be required to throw high, low, and middle-height snap and roundhouse kicks at targets to the front, the side, and the back. He will also need to punch and block with fists, open hands, and elbows."
• "My character will need to happily say the phrase 'the rain in Spain falls mainly in the plane' while winking leeringly off camera and sneaking cautiously off stage."
• "My character is a superhero in a red, skin-tight body suit. He will wear skin-tight yellow gloves that rise to mid-forearm and shoes that rise to his shin. He will wear tight blue Speedo-style shorts and have a big blue T emblazoned on his chest. His body suit will extend all over his head Spiderman style with only cutouts for eyes. He will need to run, jump, fly, kick, and punch."
• "I want my character to appear naturalistic enough to engage the audience. I will concentrate on making her eyes, face, and hair convincing so as to be better able to convince the audience of the emotions she will have to display."
Although these statements are substantial improvements over the first set of ideas, these statements are only the beginnings of practical concepts. The next step is to describe in detail what the character will actually do within the context of the story. You need to plan each gesture, action, and expression before you begin modeling. By describing each action in meticulous detail, you'll get a good feel for exactly what you will need to animate, and unrealistic expectations will evaporate under the hot light of thorough examination.

Concurrently with writing the story for the character, you must draw constantly. Visualizing both the appearance of the character and the performance that character will give is essential to completing a successful animation. At this point, the type of preproduction depends on your individual skills. If you are a strong draftsperson, you should spend more time drawing than writing; if you're a weak draftsperson, you can spend more time writing if you are more comfortable doing so. These drawings need not be highly detailed. In fact, you can indicate motions with simple, not-much-more-than-stick figures drawn rapidly (see Figure 1.1). The quantity of drawings is what is important at this stage; drawing the character accurately is a secondary consideration. Furthermore, you need to fully describe each motion in the script. A kick should have at least four drawings: a ready pose, a raised leg pose, a fully extended kick pose, and a recovery pose (see Figure 1.2). Keep in mind that these drawings help when you begin animating.

Figure 1.1: Some gesture drawings of rapidly drawn figures
Figure 1.2: Motion drawings of a kick

But wait! We haven't decided what the character even looks like, so how can we decide what it will do? This vagueness is intentional because when possible, the final design of your character is determined by what you, the animator, plan to do with it. Remember that every facet of the animation serves the story you plan to tell, and you relate that story primarily through the actions of the character. Naturally the appearance of that character is secondary to the story itself. For example, it is counterproductive to expect that an obese character can bend over, touch its toes, and vault into a forward flip (this is not impossible, but the modeling and rigging chores would be a technical director's time-consuming nightmare!). If your goal is to create a believable performance, the
model/character should help to achieve that goal. Conversely, you might already have a character pre-built or preconceived. In such cases, it is up to you to find appropriate actions for that character. Furthermore, animators working in an effects company may have this decision taken out of their hands entirely because they have to animate someone else's character; the animator then has no choice but to work with what they are given. My point is that animators should do everything possible to enhance their ability to create a believable performance, and to the extent the animator has control, the design of the character should not make that goal more difficult. Drawing is the beginning of modeling, so the beginning of modeling doesn't happen in Maya, but on paper. Here, there is no substitute for drawing ability. Having determined what your character can do, draw the character. If you are not proficient in drawing, you will have to either muddle through or hire someone who is proficient. Ideally, the final product of this phase is a fully posed "character" drawing that reveals the personality of the model (see Figure 1.3). You use this drawing as reference/inspiration later when animating. More important, you need to produce a front and side schematic drawing of the character. The details of these drawings should match in both views. Graph paper is useful for lining up details (see Figure 1.4).
Figure 1.3: A character drawing
Figure 1.4: A schematic drawing of a figure
Modeling Methods

Whereas scripting, thumbnailing, and drawing provide the conceptual basis for your work, the model provides the visual expression of your vision. Producing the model and rigging it for animation are the most time-consuming and laborious phases of any animation. You must carefully choose the method you use to create your characters: this choice has dramatic implications later in terms of setup, animation, texturing, and rendering. The mistaken tendency of most people is to believe that modeling options are simply divided between polygons and NURBS surfaces. This is a harmful oversimplification, especially given the advent of Subdivision surfaces available with Maya Unlimited. Rather, a more useful discussion centers around the degree to which a character breaks down into separate objects. Single skin vs. separate objects becomes the real question, and it is a visual rather than a technical problem.

The great mistake that many beginning independent animators make is trying to force the idea of animating a "single skin" character. They tend not to realize that few, if any, characters that they see either on TV or in movies are all one piece of geometry. Most, if not all, are segmented geometry; the seams are cleverly hidden by another piece of geometry or by the border between textures. This isn't noticeable because the character's geometry is so well animated that we are convinced of the gestalt of the character. We don't see him, her, or it as separate pieces; we see the character Woody, Buzz, Shrek, or Aki. Thus, the central point in modeling for animation is that a well-animated piece of geometry always transcends the modeling method used to create it. Consequently, you want to use the simplest modeling method necessary to create a character to animate. Four basic modeling methods are available in Maya:

• NURBS patches
• Trimmed and blended NURBS objects
• Polygons
• SubDivision surfaces
Each of these methods has its own advantages and disadvantages when modeling characters for animation.
NURBS Patches

Using NURBS (Non-Uniform Rational B-Spline) is the classic way to model organic shapes in Maya. NURBS consist of four-sided patches that are often perplexing to artists who are new to Maya. At present, you cannot directly manipulate a NURBS surface; modeling occurs when you manipulate hulls and control vertices (CVs). Manipulating patches requires that you pay careful attention to surface UV parameterization. Edge and global stitching ostensibly ensure smooth joins between patches; unfortunately, keeping edges joined through animation can be difficult. Also, without a 3D paint program (such as Deep Paint), texturing NURBS with complex image maps can be problematic. Nevertheless, NURBS surfaces are efficient, so they animate quickly, and, properly built, they provide a highly organized surface that you can easily convert to polygons.
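As a rough illustration of that last point, here is a minimal MEL sketch (not taken from the book) of converting patches to polygons and welding them into a single mesh; the surface names bodyPatch1 and bodyPatch2 are placeholders for surfaces in your own scene:

    // Convert two NURBS patches to quad polygon meshes (placeholder names).
    string $polyA[] = `nurbsToPoly -ch 0 -pt 1 bodyPatch1`;  // -pt 1 = quad output
    string $polyB[] = `nurbsToPoly -ch 0 -pt 1 bodyPatch2`;
    // Combine the pieces, then weld the coincident border vertices along the seam.
    string $combined[] = `polyUnite -ch 0 -n "bodyPoly" $polyA[0] $polyB[0]`;
    select -r $combined[0];
    polyMergeVertex -d 0.001;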
Trimmed and Blended NURBS Objects

Trims and blends are part of the classic Maya character-modeling paradigm. Maya uses curves projected on a NURBS surface to determine areas that are hidden from view. These
hidden areas are called trims. You can connect these holes relatively smoothly to other objects with blends. Blends are pieces of geometry that are interactively generated between two curves or isoparms. As each curve moves, the blend follows. Resembling lofts, these curves also maintain a certain amount of tangency with their generating surfaces. Because there are fewer surfaces overall, trimmed and blended surfaces are generally easier to texture map. Unfortunately, when activated, they slow animation unacceptably and are unusable for interactive performance.
Using Trims and Blends

Using the Set Driven Key command, you can interactively block the evaluation of the NodeState attributes of trims and blends. Setting the NodeState attribute of the ffFilletSrfn node to Blocking and the NodeState attribute of the trimn node to HasNoEffect (where n is the node's number), you can use trims and blends in character animation on a limited basis. I usually use this technique to connect the eyelids to a NURBS modeled face. Here are the steps:

1. Select the blends, and then open the Attribute Editor.
2. Click the ffFilletSrfn tab, and then click the Select button.
3. Choose Window > General Editors > Channel Control, and display this node's NodeState attribute in the Channel box.
You can now set the NodeState attribute to Blocking. By connecting the NodeState attribute to a custom integer attribute on one of your animation controls using Set Driven Key, you can control the display of all your blends at once. You can do the same for your trims by selecting the trimmed object, clicking the trimn tab, loading that into the Channel box, and displaying the trim's NodeState attribute. This time, however, set the NodeState attribute to HasNoEffect, which basically turns off its evaluation, thus speeding up the interactive performance of your animation controls.
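If you prefer to script this setup, the MEL below is a minimal sketch of the same idea rather than the author's own rig; animControl, ffFilletSrf1, and trim1 are placeholder names, and nodeState uses Maya's standard values (0 = Normal, 1 = HasNoEffect, 2 = Blocking):

    // Custom on/off attribute on the animation control (placeholder names).
    addAttr -longName "blendsActive" -attributeType "long" -minValue 0 -maxValue 1 -defaultValue 1 -keyable true animControl;
    // Blend fillet: evaluate normally when the control is on, block it when off.
    setDrivenKeyframe -currentDriver "animControl.blendsActive" -driverValue 1 -value 0 "ffFilletSrf1.nodeState";
    setDrivenKeyframe -currentDriver "animControl.blendsActive" -driverValue 0 -value 2 "ffFilletSrf1.nodeState";
    // Trim: drive it to HasNoEffect instead of Blocking when the control is off.
    setDrivenKeyframe -currentDriver "animControl.blendsActive" -driverValue 1 -value 0 "trim1.nodeState";
    setDrivenKeyframe -currentDriver "animControl.blendsActive" -driverValue 0 -value 1 "trim1.nodeState";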
Polygons

Polygon models are the oldest form of 3D surface, and most renderers (Maya's included) tessellate NURBS surfaces into polygons at render time. For a short time, polygon models fell out of favor with character animators because of their flat, faceted nature, but with the advent of SubDivision surfaces, polygonal modeling techniques are becoming more popular. Polygons are 3-, 4-, or n-sided shapes described by vertex position data. Connected, they form polygonal objects that can be shaded, textured, lit, and rendered. Polygonal modeling paradigms most closely resemble conventional sculpture, with figures being roughed out in low polygon mode and smoothed toward the completion of modeling. In Maya 3 and later, the polygon manipulation tools and animation performance have improved dramatically. Texture mapping polygon characters requires that you create UV maps and can rapidly become a tedious chore, but with skill, texture maps can make an otherwise unacceptably low polygon model into a visually interesting subject.
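To make the rough-out-then-smooth workflow concrete, here is a one-line MEL sketch (not from the book; sackRough is a placeholder for your own low-resolution cage):

    // Smooth the low-polygon rough-out once the forms are finished.
    polySmooth -ch 1 -dv 2 sackRough;  // -dv = number of smoothing divisions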
SubDivision Surfaces

These exciting polygonally derived surfaces are the newest type of modeling paradigm. They offer the animation advantages of low polygon surfaces combined with the smoothness of highly tessellated NURBS surfaces, thus combining the ease of use of polygons with the
organic quality of NURBS surfaces. They are expensive in terms of memory usage, but given the drop in the price of memory, this is no longer a problem. However, SubDivision surfaces are currently available only with Maya Unlimited (requiring a significantly greater expense than Maya Complete), and they are currently incompatible with Fur and Paint Effects. Figure 1.5 shows a flour sack that was modeled using each of the modeling methods.

Figure 1.5: A flour sack modeled using each method. From left to right: SubDivision surfaces, NURBS patches, polygons, and NURBS trims and blends.
Using NURBS to Model In my animation classes, one of the first exercises we do involves animating flour sacks that jump, walk, and interact with a ball. In past courses, we have used polygon and NURBS types of flour sacks; currently we use NURBS. We use a flour sack because it is simpler to animate than a biped yet more complex than a beach ball. Of course, before modeling, I tried to think about what the flour sack might have to do. I had to model for a variety of motions because the final project for the unit is to come up with a ball/flour sack interaction. Consequently, my students need a wide latitude of motion in the sack. Figure 1.6 shows some of the preparatory drawings I used in planning for these motions.
Figure 1.6: Character sketches of the flour sack
Building a flour sack is a useful first step in building a more complex character, because you can apply many of the modeling techniques to just about any character. In this instance, we stitch "hands" and "feet" onto our flour sack. I use NURBS patches for a number of reasons. First, from this finished model, you can easily tessellate the model into a polygonal model of varying detail. Merging vertices results in a single polygonal mesh. Or you can bind the model directly to a skeleton. Modeling with patches gives us many advantages, such as interactively tangent blends;
instant, controllable smoothing; and surface attachability and detachability. Using the file floursackdismembered.ma on the CD, let's begin. One problem with patches is aligning the surfaces properly so that they join as smoothly as possible. Stitching edges and global stitching help a lot, but that won't make a perfectly aligned "silk purse" out of these completely incongruous "sows' ears" that we have now. I use the following method:
1. Replace end caps
2. Fillet blend between patch isoparms
3. Delete history on fillets
4. Rebuild fillet geometry
5. Attach fillet geometry to appendage patches
6. Global stitch
In the steps in the following section, I start from the hotbox whenever I choose a menu command. Also, because you'll want to see the wireframe while working, choose Shading → Shade Options → Wireframe on Shaded if it is not already visible. The steps in the following sections assume some knowledge of NURBS modeling. Although we will model the flour sack in this case, these techniques apply to creating any patch model.
I created floursackdismembered.ma by massaging NURBS spheres into shape for the body, the "hands," and the "feet." Figure 1.7 shows the settings I use when rebuilding after detaching surfaces. I detach and rebuild immediately, because it is sometimes easy to lose track when working with a patch model. All these surfaces have been rebuilt/reparameterized and are ready for action. Replace the End Caps
The top and the bottom of this flour sack have four patches that meet at the center. I prefer to work with one patch rather than four because it is easier to stitch and texture map. Therefore, I plan to create a single patch using the Boundary tool. Here are the steps: 1. RMB each patch, and select the isoparm at the base of each triangular patch, as shown in the image on the left in Figure 1.8. 2. Choose Edit Curves → Duplicate Surface Curves to create the four curves we will use to form our patch. Because we create them from these existing patches, they touch at
Figure 1.7: The Rebuild Surface Options dialog box
Figure 1.8: The three-stage process for replacing the end cap using the Boundary tool
their end points and are ready to have the Boundary tool applied to them to create our patch. 3. Delete the triangular end patches, as shown in the center image in Figure 1.8. 4. Shift+select the curves in clockwise order, and choose Surfaces → Boundary. In the option box, go to the Edit menu, reset the settings, and click the Boundary button. Press the 3 key to smooth the display of the object, and you should have what is depicted in the image on the right in Figure 1.8. Deselect Surfaces in the Select by Object Type button bar, and marquee select the curves and delete them. Now replace the end caps for the other side. 5. Save your work. Although these surfaces are flat now, they will blend in with the rest of the flour sack nicely when we stitch them later. Fillet Blend Between Patch Isoparms The next step is to fillet blend between geometries to create geometry that smoothly transitions between the hands, feet, and body. This is much easier than lofting, pushing, and pulling points into place. Here are the steps: 1. RMB one of the patches on the hand and a facing patch on the body, and pick-mask Isoparm. Shift+select the closest isoparms on each patch, as shown in image A in Figure 1.9. 2. Choose Edit NURBS → Surface Fillet → Freeform Fillet to display the options shown in image B in Figure 1.9. Click Fillet to see the result. 3. Go around the hand and create a blend for each patch, as shown in images C, D, and E in Figure 1.9. Don't be alarmed when the isoparms don't match; we'll rebuild them to match the hand patches shortly. 4. Now do the other hand and the feet the same way. We could have lofted between these isoparms, but blending allows us to adjust the CVs in the hand patches while the blends, by their interactive nature, adjust tangency to match. The X-ray image in Figure 1.10 illustrates this.
Figure 1.9: Blending the hand to the body
Figure 1.10: Editing CVs while the blends automatically remain tangent
There is an ugly bend in the blend resulting from misaligned CVs in the bottom patch. To fix that, I grabbed the second-to-last hull on the bottom hand patch as well as the last two CVs on the same hull on the adjacent patches (this makes sure that we don't pull our hand patches out of alignment) and pulled it out along the X and Y axes; the blend matched tangency. Delete History on Fillets These blends follow the shape of the hand and foot patches because they have a construction history and are fillet blends. Deleting history removes the construction history and makes these blends static geometry. Select the blends, and choose Edit → Delete by Type → History. Rebuild Fillet Geometry Now the blends are regular geometry, but their parameterization is wacky. The attribute readings for them will look something like the upper image in Figure 1.11. This is a good example because we want the spans in U and V to be equal to match all the blend patches around the hand. If you need extra spans running around the hand, rebuild with three or more spans in that direction. 1. Choose Edit NURBS → Rebuild Surface, clear the NumSpans check box, enter 2 in the Number of Spans U field, and enter 2 in the Number of Spans V field. Click the Rebuild button, and you will see that the U and V spans are much more in line with what we want, as shown in the lower image in Figure 1.11. 2. Repeat this procedure for each blend. Select all the blends at once and rebuild them together to speed things up considerably.
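The same rebuild can be run from MEL. Here is a minimal sketch, assuming one of the blends is named blendSrf1 (a placeholder name); rebuild type 0 is Uniform, and the spans and degrees match the settings described above.

select -r blendSrf1;
// Uniform rebuild, 2 spans and degree 3 in both U and V, 0-to-1 parameter range.
rebuildSurface -constructionHistory 1 -replaceOriginal 1 -rebuildType 0 -spansU 2 -spansV 2 -degreeU 3 -degreeV 3 -keepRange 0;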
Figure 1.11: Upper image: Fillet blend with Chord length parameterization. Lower image: Blend with parameterization that matches surrounding patches. Rebuild the blends after deleting history.
Attach Fillet Geometry to Appendage Patches Now that our blends are rebuilt, we need to attach them to their respective hand patches. Follow these steps: 1. Select the hand patch, Shift+select the blend, and choose Edit NURBS → Attach Surfaces. Clear the Keep Originals check box if it is currently checked. The upper image in Figure 1.12 shows the two patches before applying the Attach Surfaces command, and the lower image shows the two patches afterward. Figure 1.12 also shows the settings used to attach the patches. 2. Repeat step 1 for all the hand and foot patches. 3. Select all the patches, and choose Edit → Delete by Type → History. The four patches that now compose each hand and foot are close to tangent with the body patches and close to tangency with each other. We are now ready for the final step: Global Stitch.
Figure 1.12: Attach surfaces before and after applying the Attach Surfaces command
If the hands and feet patches are way out of alignment, you should go back and attach the hand and foot patches laterally with the Attach method set to Blend and no inserted knots to make one surface for each hand or foot. Close the surface, and then detach the patches along the original seams to ensure smoothness around the hands and feet.
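For reference, here is a hedged MEL version of a single attach, with handPatch1 and blendSrf1 standing in for the real surface names; -method 1 corresponds to the Blend attach method, and -replaceOriginal 1 corresponds to clearing the Keep Originals check box.

// Attach one blend to its hand patch with the Blend method.
attachSurfaces -constructionHistory 1 -replaceOriginal 1 -method 1 -keepMultipleKnots 1 -blendBias 0.5 handPatch1 blendSrf1;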
Global Stitch Global stitching allows you to join patches together seamlessly over an entire character or object. Follow these steps: 1. To see Global Stitch in action, it is best to deactivate Wireframe on Shaded. Choose Shading → Wireframe on Shaded. 2. Now, after you delete history on all the patches, select all the patches and choose Edit NURBS → Stitch → Global Stitch. This step is tricky, but well worth the time to get right. Global stitching is cool. Not only does it automatically align all the patches, it is also easy to adjust because it is a single node applied to all the patches. Figure 1.13 shows the option window with the appropriate settings. By adjusting Stitch Smoothness, Max Separation, and Modification Resistance, you can create a smooth-surfaced sack. The big question is, do you select Normals or Tangents? Tangents will do everything possible to try to smooth between patches, even if it misaligns isoparms and CVs. If you can make Tangents work, select Tangents. Normals smooths the surface, but not as well. So let's set Stitch Smoothness to Tangents and click the Global Stitch button.
3. Deselect all the patches. Ugh! It looks as if the top of the sack is curdled! We need to fix this. 4. Select a patch as shown in the image on the left in Figure 1.14, and click globalStitch1 in the INPUTS section in the Channel box. You can adjust these settings to change the entire sack, which is convenient. 5. Click the Modification Resistance attribute name in the Channel box, and MM click, hold, and drag back and forth in the modeling window to adjust the attribute's value. Hold down the Ctrl key to increase the "fineness" of the adjustment if necessary. You want this number to be as low as possible and still get a smooth stitch because this number controls the amount of resistance the CVs show to the global stitch.
Figure 1.13: The proper Global Stitch settings
Global Stitch is a powerful, but stupid process. When Stitch Smoothness is set to Tangents, it moves CVs wherever it wants to set up its version of what a smooth stitch requires, even if the surface buckles unusably. After setting the Modification Resistance attribute to a value that gives you the smooth surface you want, you're done. 6. As a last step, select all the patches, and choose Edit → Delete by Type → History.
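After the stitch exists, you can tune the same attributes from MEL instead of MM-dragging in the Channel box. The snippet below is only a sketch: the node name globalStitch1 comes from the steps above, the numeric values are illustrative, and I am assuming Tangents is the higher value of the stitchSmoothness enum, so verify the enum order in the Channel box before relying on it.

setAttr "globalStitch1.stitchSmoothness" 2;       // assumed to be Tangents
setAttr "globalStitch1.maxSeparation" 0.1;        // illustrative value
setAttr "globalStitch1.modificationResistance" 10; // keep this as low as a smooth result allows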
Figure 1.14: Before and after adjusting Modification Resistance values
Patches are too far apart If the patches start too far apart, there can be major problems: it appears that Maya will attempt to join not only the separate surfaces but also join and align CVs that are on the same patch if they are within the Max Separation distance of the stitch. It is best to start with patches that touch or are very close and that are already aligned. Accomplish this by edge stitching first and then deleting history.
Misparameterized surfaces Trying to join surfaces that don't have the same number of U spans and/or V spans can be problematic. Make sure that all surface isoparms match both numerically and visually.
Surface discontinuities The image in the upper left in the following graphic shows a surface discontinuity where five patches meet. Correcting this involves individually shift-marquee selecting the five corner points and five pairs of tangency points where the patches meet (you want to make sure you get the multiple points, as shown in the image in the upper right). Shift+select the five points in between, as shown in the image in the lower left. Now run Elie Jamaa's fine Planarize plug-in (available on the accompanying CD-ROM). That should fix the problem, and the image will appear like that in the lower right of the following graphic. For those pesky, hard-to-adjust discontinuities that resist automated fixes, double-click the Translate/Move tool and select Normal mode. In this mode, you can push and pull CVs in the U and V directions. You can also drag them outward along the surface normal. Using Normal mode is the best way to align CVs manually. Pull the CVs perpendicular to the seam to align the patches. Work with CVs and hulls visible for best results.
Once you correct any problems and have a smooth sack, you are ready for the next step, texturing.
Texturing a Flour Sack You need to keep textures as simple as possible. Different modeling topologies make this task simple or difficult depending on what was used and how the character should look. In our flour sack example, a simple burlap texture scanned into the computer and made seamless in Photoshop is the best bet. (See burlapsq.tif on the CD.) Connected to the bump channel in the Hypershade window, this texture can also serve as a bump map. Applied to all the patches of the sack, it appears seamless. The only problem is that the ears and hands patches, because they are so long and thin, can cause the texture map to stretch. However, for simple animation exercises in which the focus is on demonstrating the individual's animation skill, small texture imperfections should be no problem. Figure 1.16 shows the difference between a NURBS flour sack (on the left) and a medium resolution polygonal
Figure 1.16: Left image: A NURBS flour sack textured by using built in UVs. Right image: A medium resolution polygon flour sack textured with burlap applied with custom UV maps.
flour sack textured with the same burlap texture mapped with a custom UV map (on the right). Speeding the texture-mapping step is one of the biggest benefits of proper planning prior to modeling. When you predefine the overall look of the animation, texture mapping becomes easier. Achieving good textures on your subject is difficult enough without trying to define them while you work on the technical part of mapping. Every aspect of Maya gives you an enormous number of choices; it is extremely easy to get lost in the many ways of creating textures. If for no other reason than this, starting this process with a clear idea of what you want and learning how to obtain it is crucial. It is also important to keep a consistent textural style throughout an animation. If a character is textured naturalistically, it becomes the standard by which the rest of the textures in the piece are judged. Much like a drawing in which one part is worked to a high finish, the polished main character will look incongruous with the backgrounds if the rest of the textures are haphazardly thrown into the piece. Although it is entirely possible to play complex textures off simple textures to great effect, such contrasts do not often happen by accident. A consistent level of thought makes such composition devices work in an animation. NURBS surfaces and polygons require different methods of texture mapping. Each NURBS surface, whether a separate shape or a patch, has an implicit U and V direction and is square in shape. All textures applied to NURBS surfaces stretch to fit that shape. Although
this feature makes NURBS shapes easy to map, it makes multiple patches difficult to map consistently over the entire surface of your model. For all practical purposes, mapping multipatch models with images fitting across their surfaces requires a 3D paint program specifically designed for that purpose. As such, texturing such surfaces is beyond the purview of this chapter. Polygons provide a much better example that can be textured from within Maya (and Photoshop, of course). Now let's convert the NURBS surfaces to a polygonal mesh. Follow these steps: 1. Converting the NURBS sack to polygons is simply a matter of choosing Modify → Convert → NURBS to Polygons, using Quads, the General Tessellation Method, and a setting of 1 for the U and V directions under Per Span # of Iso Params. 2. Click Tessellate to create a polygon flour sack. 3. Choose Edit → Delete by Type → History. 4. In the UV Texture Editor window, delete the UVs that were brought over from the NURBS to polygon conversion and remap with your own map. I prepared two files that contain UV maps for this polygonal flour sack. SackUVone.ma and sackUVtwo.ma (available on the CD) present two different methods for arranging the UVs for this sack. I created them by planar projecting the front and back polygons from the front camera view. I flipped the back UVs, moved them beside the front UVs, and moved them both out of the way in preparation for the next group of polygons to be mapped. I projected the top and side polygons in the same way. I created the UV map for the hands and feet polygons using automatic mapping, and I sewed them together to create the maps found in the files. These two UV maps differ only in the way the polygons are sewn together to create the fabric of the sack. When creating UV maps, be sure that adjoining edges are the same size. They are the same edge, but they exist at two different coordinates in the flat UV map. If one edge is longer than the other, there will be an irreconcilable seam where the two disparate edges meet in the model, as shown in Figure 1.17.
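Steps 1 through 3 above can also be run as a single MEL call. Treat this strictly as a sketch: the object name nurbsSack is a placeholder, and the enum values are my assumption for Quads, the General tessellation format, and Per Span # of Iso Params, so check them against the option box before using the snippet.

// polygonType 1 = Quads; format 2 = General (assumed);
// uType/vType 3 = Per Span # of Iso Params (assumed), with 1 isoparm per span.
nurbsToPoly -constructionHistory 0 -polygonType 1 -format 2 -uType 3 -uNumber 1 -vType 3 -vNumber 1 nurbsSack;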
Think about where these seams should line up. This flour sack will be made of some kind of cloth or paper (in this case, burlap), so it will be tremendously advantageous to think of these patches as patches of cloth that will be sewn together and then filled with flour. Letting that thought guide me, I created sackUVtwo.ma and used it to create the textures. I select the flour sack, open the UV Texture Editor window, and choose Polygons → UV Snapshot. Now I can export the image in the texture window as an image to import into Photoshop. I recommend a size of 1024 on U and V. Turn off anti-aliasing, and make the line color white. I recommend Targa or Maya IFF for the export; Targa works in Photoshop with no extra plug-ins. Figure 1.18 shows my Photoshop environment. I floated the lines from the UV snapshot by painting the alpha channel in white onto a transparent layer. I then lassoed a section of burlap and dragged it into a layer sandwiched between the white line layer and the background. As I complete each piece of burlap, I merge it down onto the background. I use Photoshop's Rubber Stamp tool to remove any seams where I have pasted two pieces of burlap that don't quite match naturally. I allow the texture to overlap the edge of the UV patch because I will almost surely have to tweak the poly UVs after I apply the texture and smooth the polygons.
Figure 1.18: The Photoshop environment
I have included a flat burlap square seamless texture on the CD (burlapsq.tga). I created it by scanning a piece of burlap and making it seamless using Photoshop's Clone tool. I have also included the burlapcUV.tga file, which is the finished UV mapped color texture created earlier in this chapter. It applies seamlessly to the sack in sackUVtwo.ma. From this file, you can derive a bump map by either connecting it to the bump channel in the Hypershade window or editing it in Photoshop to create a more custom look. I also included a file called ColorMap.tga, which corresponds to the map in sackUVone.ma. Use it as a guide to mapping that file. With texturing done, it is time to move to setup. However, it isn't necessary to wait until texturing is complete to move on. If you want to, you can set up the original and create a UV map of a duplicate copy. Then, when finished, you can choose Polygons → Transfer to copy the UV map to the setup original.
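The UV snapshot itself can be scripted. The sketch below assumes the sack mesh is named sackMesh (a placeholder) and writes a 1024x1024 Targa with white lines and no anti-aliasing, matching the recommendations above; the output file name is just an example.

select -r sackMesh;   // placeholder name for the polygon flour sack
uvSnapshot -overwrite true -antiAliased false -fileFormat "tga" -xResolution 1024 -yResolution 1024 -redColor 255 -greenColor 255 -blueColor 255 -name "sackUVSnapshot.tga";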
Setting Up for Animation I suspect that many animation projects are "rotting" on hard disks everywhere. I further believe that these projects are hung up at the setup stage because of the dizzying array of options that 3D programs such as Maya offer to the technical director: Smooth vs. Rigid skinning, Direct vs. Indirect skinning, Lattice vs. Cluster deformations, and Forward vs. Inverse kinematics are just a few of the many setup techniques available. What's that you say? You are an animator! That may be true, but technical directors (TDs) usually handle character setup. If you are doing all the work for your production, congratulations! You are now a TD!
Figure 1.19: These ballpoint pen sketches are only a little more involved than the gestural sketches from earlier in the chapter, yet they provide good examples of the extreme poses this character will need.
Setup is arguably the longest phase of the animation process (prior to actually animating). You can take a number of steps to shorten this process, but ideally the person doing the setup needs to know what the character is going to do. Given the fluid nature of the animation process, this ideal is impossible to fully achieve, but because a good setup speeds the rest of your work, it is more than worth putting effort into the front end of the process. Specifically, drawings or other indications of what the character must do that include the most extreme poses give the best indication of what the TD will need to set up. Figures 1.19 and 1.20 are examples of these types of drawings. In a professional production pipeline, story, character design, and concept changes during the production process make absolute definition of what the character must do impossible, so character TDs must set up for a wide latitude of motions. The solo TD/animator, however, is responsible for the process from beginning to end, so having complete knowledge of what the character must do is indeed possible. That said, three principles can speed the setup process, and anyone interested in "going it alone" and producing an animated short themselves or anyone setting up a character for someone else would do well to be familiar with them. As with anything associated with animation in general, or Maya in specific, these principles are general concepts and can be implemented as needed.
Have a Plan and Stick To It Have a method of attacking setup and follow it carefully. Adding or changing methods midway through the process is frustrating, and frustration is lethal to the animation process. Resist the urge to believe that the setup grass is greener and use another method that requires you to start from scratch or from many steps backward. Each method or technique has its own set of features and difficulties that materialize at different times in the setup process. Giving in to self-doubt leads down a path of frustration. Conventional wisdom says that different skinning methods are good for different types of characters. Some say that smooth skinning works for single-skin polygonal characters and that rigid binding works for some other types; but I would like to suggest that skinning is more a matter of personal skills and temperament than intrinsic value. If a person is a great digital painter and has a great deal of patience with painting, using smooth skinning is a good way to set up just about anything, be it NURBS or polys. Similarly, if the TD understands set membership and lattice deforming as a way to correct joint influence problems, rigid binding can work wonders with polys or NURBS. A good modeler might choose blendshapes for facial animation and muscle deformation, while the more technically bent might choose to animate the face with joints and/or clusters controlled with a set driven key technique. Advantages and disadvantages are associated with every setup method; there is no silver bullet.
Figure 1.20: These drawings also show extreme poses, but with the added dimension of form. I scanned some ballpoint pen gestural thumbnail sketches into the computer at a high resolution, resized them, and printed them lightly on toned paper. I then reworked them with white and black colored pencil and outlined them in black ink. The idea behind this method is to add the illusion of form and volume without losing the gestural quality of the sketch.
Set Up Only What You Need to Carry the Story This should seem obvious, but judging from the questions I see on many listservers I monitor, many artists try to include too much in their rigs. In the setup phase, re-evaluate what you need your character to do. Do all the fingers really need to bend and curl independently, or can they all be bent with one control? (See Figure 1.21.) Are muscle bulges really necessary to move the story forward? Does your character need to deform so that it looks like it is breathing? Having a clear idea of what your character needs to be able to do is the best setup tool. If the purpose of your animation is to show off your ability to create an effective rig, by all means, put in as many controls as you have time for. You are in effect stating that your ability to rig is the story you want to tell. Use all the bells and whistles to create as many features as you desire; you will be furthering the story by doing so. If animation is your focus, rigging unnecessary features that no one will ever see wastes time that could be spent enhancing your animation. Because the thrust of this chapter is preparation for
Figure 1.21: Why make the fingers able to bend individually when all you will need for your animation is a single bend control? Which do you think will be easier to animate?
character animation, I want to stay focused on simpler control, which brings us to our next principle.
Set Up for Speed of Animation, Not for Character Features The need for speed is inherent in animation setup. The faster everything moves, the more you can concentrate on animation; everyone knows that. But also, the faster you can select and move the animation controls, the faster animation can actually happen. After you decide which controls and features you need to set up to help tell the story, you need to make sure that these controls and features are easily accessible, movable, and keyable. The first step is deciding what needs to be automated and what the animator (you) must be able to control. You should never see some features, such as modifications that keep the mesh from collapsing during joint rotation. Some features may enhance the appearance of the model as it is animated, but they are so small that the audience will not notice them unless the camera focuses on that part of the character. You can automate other controls, but the animator should really control them. Knowing which is which is essential.
Do Not Let "Perfect" Be the Enemy of "Good Enough" Animators often spend enormous amounts of time tweaking the setup by tuning each vertex or CV in the Component Editor, polishing controls, creating clusters for every little muscle twitch, and generally worrying over every possible problem that could arise. They spend so much time that the animation phase suffers, and the entire animation winds up being less than it should be. The counter to this and many other problems is to remember that even as a wellanimated piece of geometry will transcend the modeling and imaging methods used to model and texture it, so will strong, convincing animation transcend any setup imperfections. A good performance trumps distorted models every time. Therefore, the TD/animator should set a time limit on setup, begin with the overall features such as skin weighting and IK controls, work outward to finger bends and facial features, and finish with details such as muscle flexing if time permits.
Creating an Arm Now let's put some of these principles in action using a well-muscled human arm. We'll work first on the wrist. I use a system of NURBS boxes (created with ControlBox.mel on the CD and shaped by translating CVs) as controllers for the body bones (using FK), arms and legs (using IK), facial expressions (using SDK—set driven key—blendshapes), and fingers (using SDK). I find that using point constraints for the arm IK combined with an orient constraint for the wrist bone allows the hands to stay locked in place when the body moves. The problem is that connecting the elbow to the wrist directly causes the mesh around the wrist to collapse when the wrist rotates around the X axis too far. Correcting this deformation is difficult; weight painting can only take you so far. An elbow, ulna, and wrist arrangement provides a way to transfer the x-rotations of the wrist to the short ulna, providing a way for a lattice to correct the collapsing forearm while allowing the wrist to rotate freely in the Y and Z axes. What we'll do is bind the lattice to the elbow and ulna while constraining the rotation of the wrist to the Wrist control box.
The steps in the following section assume that you know how to use Smooth Bind and paint weights to create relatively smooth joint deformations.
Finish Setting Up the Arm Bones Open ArmSetupStart.ma on the CD, and you will see the skeleton created by snapping the joint placement to a high-resolution grid. All the joints are perpendicularly placed. The CSpine joint is a fork-shaped bone that is actually in line with the rib cage joint, but it has a spur that connects to the collar. The shoulder has a spur that connects to the deltoid joint. This spur bone provides a clusterlike control for that large mass of geometry corresponding to the deltoid muscle (see Figure 1.22). We'll paint weights to assign these vertices to the joint. To correct a collapsing wrist mesh, follow these steps: 1. From the top view, rotate and scale (do not translate) the shoulder and elbow joints into position. This keeps the axes pointed in the right direction. Really, that is the key to joint placement: it isn't important for any particular axis to point down the bone, but we don't want our axes to get out of alignment. Rotating and scaling joints into position prevents that from happening. We will still have to adjust some of our local rotation axes before we can constrain our joints to the NURBS control boxes anyway.
Figure 1.22: The deltoid muscle CVs selected
Figure 1.23: Adjusting the local rotation of the CSpine joint 2. Select the root joint, in this case the rib cage bone, and type select -hi at the command line. (You will want to make a shelf button out of this command by drag selecting it and MM dragging it to the shelf.) This selects the entire hierarchy of joints. 3. Choose Skeleton → Set Preferred Angle to set the bones in their proper place; this will be the angle that Maya's IK solver starts to solve from. More important, it sets the plane described by the placement of the shoulder, elbow, and ulna bones in place, providing a direction for the Pole Vector to point. 4. Select the Shoulder control box and the CSpine joint (in that order), and go to Component Selection mode. 5. RM the Question Mark button, and display the local rotation axes. Select the axis for the CSpine joint and rotate it to match the Shoulder control box, as shown in Figure 1.23. To be as accurate as possible, work from the front view and increase the size of the manipulator by pressing the plus (+) key. (Pressing the minus (-) key decreases it.) 6. Press F8 to switch into Select by Object Type mode, and then choose Constrain → Orient to constrain the CSpine joint to the control box. Now it will rotate with the Shoulder box. We can do this immediately because we originally selected the box and then selected the joint to be constrained. 7. Snap the HandControl box to the wrist joint by selecting the HandControl box, holding down the v key, and moving the box to the wrist joint. It should snap to the wrist. 8. Freeze transformations on the box, and adjust the local rotation axis of the wrist to match the box. We freeze the transformations on the control boxes so that we can zero out any transformations and return the skeleton to the bind pose. 9. Click the HandControl box, Shift+select the wrist joint, and orient constrain the wrist to the HandControl box. 10. With only the wrist joint selected, open the Attribute Editor and clear the X axis check box under Joint: Degrees of Freedom to lock it in place. 11. With the Attribute Editor still open, select the ulna and clear the Y- and Z-axes check boxes under Joint: Degrees of Freedom to lock them in place.
12. In the Channel box, right-click the Rotate X attribute to open the Expression Editor. We could use the Connection Editor, but because we would have to use an expression for the right hand if we were setting it up anyway, doing it for this side gives us symmetry, which is clearer. 13. In the Expression Editor, type Ulna.rotateX = HandControl.rotateX * 1. For the right side, multiply HandControl.rotateX by -1 to make the hand rotate correctly. The HandControl now rotates the wrist joint in Y and Z while the Ulna rotates in X. Because we used an orient constraint to control the wrist, it maintains its orientation no matter how the upper arm moves.
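Step 13 can also be done with the expression command; this is a sketch that assumes the joint and control names used above (Ulna and HandControl).

expression -name "ulnaTwist" -object Ulna -string "Ulna.rotateX = HandControl.rotateX * 1;";
// For the right arm, use "* -1" instead so the hand rotates correctly.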
Create the Arm IK To create the IK and point constrain it to the HandControl, follow these steps: 1. Using the Rotate Plane IK (ikRP) solver tool (available in the IK Handle Tool option window), select the shoulder first, and then select the ulna to generate the IK handle. (Notice how the Pole Vector points straight out the back of the shoulder. We'll use this later.) 2. Choose Window → Hypergraph, and select the effector that is connected to the shoulder joint. This is quite different from the IK handle itself. The effector is usually hidden when you are working, but we will need to move its pivot point and snap it to the wrist. 3. With the effector itself selected, press the Insert key, press and hold down the v key, and then move the effector to the wrist. Notice that the IK handle moves with it; this is good because we are going to constrain the handle to the HandControl box with Constrain → Point. Press the Insert key again to jump back to editing the object. If you have to, press and hold down the v key and snap the HandControl box to the wrist joint (and the IK handle) and freeze transformations. 4. Select the HandControl box, Shift+select the IK handle, and choose Constrain → Point to lock the IK to the HandControl box. 5. Select the ElbowPointer control box, Shift+select the IK handle, and choose Constrain → Pole Vector. Now you can rotate the plane of the IK solver by moving the ElbowPointer control box. More important, moving the Pole Vector constraint fights the flipping of the IK plane that occurs when the IK handle moves past the base of the IK chain. 6. Move the ElbowPointer box directly behind the shoulder joint, parent it to the Shoulder control, and freeze transformations. Keeping the pole vector constraint object directly behind the shoulder makes keeping the IK plane aligned with the arm much easier. Moving the ElbowPointer control box up and down moves the elbow independently of the shoulder and wrist (see Figure 1.24).
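For reference, here is roughly the same IK setup in MEL, skipping the interactive step of snapping the effector's pivot to the wrist (steps 2 and 3); the joint and control names match the scene file but should be treated as assumptions.

// Rotate-plane IK from the shoulder to the ulna.
string $ik[] = `ikHandle -solver ikRPsolver -startJoint Shoulder -endEffector Ulna`;
// Lock the handle to the hand control and add the pole vector control.
pointConstraint HandControl $ik[0];
poleVectorConstraint ElbowPointer $ik[0];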
Figure 1.24: The elbow now moves independently of the shoulder and wrist.
Bind the Arm Now let's move on to binding. Many animators believe that binding should occur before constraining. However, I've found that if I need to make any drastic changes to the binding, disabling nodes, detaching, fixing, and going to bind pose resets all the local rotation axes, forcing me to manually reorient these axes. Also, because I'm using IK instead of FK, I want to weight using rotations coming from IK rather than FK; so I constrain before binding. Also at this point, I would delete the hand joint and create a hand, but because we're looking specifically at the forearm, we don't need to bother. Before binding, be sure that your geometry is clean—that it has no construction history other than blendshapes. Most important, be sure that you have removed any polySmoothFace nodes in your construction history. These nodes specifically prevent the successful removal of any points from the smooth skin cluster we are about to create. Failure to remove any polySmoothFace nodes causes the mesh to tear apart. It's always a good idea to start with clean, history-free geometry before binding. Now let's bind the geometry to the skeleton. Follow these steps: 1. Open the Outliner so that you can see selections as you make them. 2. Select the top joint in the hierarchy (the rib cage), and then select hierarchy (select -hi;). 3. Click and hold the Skeleton Layer in the Layer bar. Select Standard to allow you to select the skeleton. Untemplate the PolyArm layer, and then Shift+select the polygonal arm mesh. Choose Skin → Bind Skin → Smooth Bind with Bind To set to Selected Joints. The other settings in the option box should be Bind Method: Closest Joint, Max Influences: 2, and Dropoff Rate: 4.0. I use Max Influences of 2 because I find that limiting skin influence to two joints at binding time makes weighting more manageable later. Click Bind Skin to skin the mesh. (A MEL sketch of this bind appears after these steps.) Ninety percent of the time, I bind only selected joints to a particular mesh. For segmented characters (which are how most characters are modeled), it is important that those mesh objects be bound only to the necessary joints. This is faster and easier to weight than binding these meshes to the entire skeleton.
4. If you move the HandControl box, you will see that the mesh doesn't look too bad! We will need to fine-tune the shoulder and the elbow joints, but that is relatively easy to do. I like to set a key at an extreme pose for the wrist so that I can scrub through the timeline and see how the mesh deforms in motion. Another great timesaver is to use the Attribute Collection script (available on Alias|Wavefront's website—www.aliaswavefront.com—or on www.highend3d.com) to create sliders to move the HandControl box while you are weighting. 5. Let's set a key at frame 1 and move the HandControl box to an arm-bent position at frame 10, as shown in Figure 1.25. Copy the key from frame 1 and paste it at frame 20 to take the arm back to its bind pose. You can now paint weights on the shoulder and elbow joints. The elbow joint is fairly easy to tweak, but we are going to concentrate on the wrist. If you rotate the wrist in X, you can see that rotating the HandControl too far in X distorts the mesh badly. If you know absolutely that you will not need to twist the hand so far that the mesh collapses, you can skip the next section and move on to the biceps. Otherwise, let's correct that weight.
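Here is the bind from step 3 as a rough MEL sketch. The joint and mesh names are placeholders; in practice, select the joints you actually want to bind to, plus the mesh.

select -r Shoulder Deltoid Elbow Ulna Wrist polyArmMesh;   // placeholder names
skinCluster -toSelectedBones -maximumInfluences 2 -dropoffRate 4.0;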
Figure 1.25: The arm is bent at frame 10.
Correcting the Wrist Any setup correction should require that the TD do the minimum necessary to fix the problem. If you can fix the distortion acceptably by painting weights, stop there. More than likely, however, you will need to add a lattice. Basically, you can fix the Y and Z rotations with painting. You will most likely find that even minimal wrist rotation on the X axis will cause the mesh to collapse; this cannot be fixed by painting. It's possible to spread out the deformations, but the mesh still collapses unacceptably. With a lattice, we can not only spread out the deformation, we can key the tweak node of the lattice to expand and negate or even reverse the tendency of the forearm to collapse. To create and weight the wrist lattice, follow these steps: 1. Choose Edit → Paint Selection Tool, and then paint a selection like that shown in Figure 1.26. 2. Choose Deform → Create Lattice and create a lattice with 7 divisions in S, 3 in T, and 3 in U, as Figure 1.27 shows. We have extended the selection just past the ulna. We now need to remove the points that are in the lattice from the overall skin cluster of the body. Choose Window → Relationship Editors → Deformer Sets, click the ffd set in the left window, choose Edit → Select Set Members to select those points, click the SkinCluster set in the Relationship Editor window, and press the big minus button at the top of the window to
Figure 1.26: The forearm CV selection
remove the selected points from that set. We do this to avoid double transformations once we bind the lattice to the elbow and ulna joints.
Figure 1.27: Creating the forearm lattice
3. Shift+select the elbow and ulna joints, select the lattice itself, and choose Skin → Bind Skin → Smooth Bind. Rotate the HandControl -90 degrees in X, and you will see that the deformation, while better, is not perfect because the majority of the points are closer to the elbow joint. They are influenced too much by that joint. We have to adjust the weight of the lattice points themselves. The good news is that this is much easier to do than it sounds. Because we are only working in X, we can adjust all the points in each row at once with the same weight. 4. Turn off Curves, Joints, and Surfaces in the Set Object Selection Mask button bar. RM the lattice, choose Lattice Points, marquee select the row of lattice points closest to the ulna, and then choose Window → General Editors → Component Editor. Click the SkinClusterSet, and you can see that the influence on these points is split almost evenly between the elbow and the ulna. We need the influence for this row to be 1 for the ulna and 0 for the elbow. 5. Weight the rows so that they smoothly twist toward the back row, which should be weighted to 1 for the elbow joint. Notice that the mesh still shrinks, but it is smoother than it was. The last thing we will do to complete this process is to key the shape of the lattice itself to the X-axis rotation of the HandControl
Figure 1.28: Upper image: The wrist before lattice creation and rigging. Lower image: The wrist after connecting the tweak node of the lattice to the x-rotation of the wrist controller using set driven key.
by connecting the X, Y, and Z positions of the lattice points to the positive and negative x-rotational values of the HandControl box. To connect the lattice points to the rotations of the HandControl box, follow these steps: 1. Reset the x-rotation on the HandControl to 0. 2. Click the Rotate X attribute of the HandControl, and RM it to open the Set Driven Key window. Click Load Driver. 3. Right-click the lattice, select Lattice Points, and marquee select all the lattice points. Click Load Driven. 4. Drag-select xValue, yValue, and zValue on the right side of the Driven section of the window. Click the rotateX value in the Driver section, and then click Key. 5. Rotate the HandControl box 100 degrees in X. We want to go past the highest x-rotational values we will use (usually 90 and -90). If the deformation looks good at that point, it should look good at all values leading up to it. 6. Scale, rotate, and translate the points of the lattice so that the forearm shape is pleasing. Key it in the SDK window, and rotate it back 100 degrees in the other direction. Adjust the points, key it, and you're done! You now have a forearm that allows for isolation on the hand and x-rotation without wrist collapse, as shown in Figure 1.28.
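The set driven keys from steps 2 through 6 can also be created with setDrivenKeyframe. The sketch below keys a single lattice point; in practice you would marquee select all the points (or loop over them) exactly as described above. The lattice shape name ffd1Lattice is an assumption.

setAttr "HandControl.rotateX" 0;
setDrivenKeyframe -currentDriver HandControl.rotateX ffd1Lattice.pt[0][0][0].xValue ffd1Lattice.pt[0][0][0].yValue ffd1Lattice.pt[0][0][0].zValue;
// Rotate to 100, reshape the lattice points interactively, then key again.
setAttr "HandControl.rotateX" 100;
setDrivenKeyframe -currentDriver HandControl.rotateX ffd1Lattice.pt[0][0][0].xValue ffd1Lattice.pt[0][0][0].yValue ffd1Lattice.pt[0][0][0].zValue;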
Create the Biceps Muscle Moving on to the biceps, we confront a question of setup philosophy. Many artists believe that you can cluster the biceps and create a flex for it based on the rotation of the elbow joint using SDK or expressions, but this ignores human physiology. The animator will need to control the amount of flex in the biceps because how much it flexes reveals a great deal about the character's state of mind by depicting how much force the character exerts when flexing. Although I am a minimalist as far as animation controls go, if the biceps is to flex, especially with a hugely muscled character, the animator should be able to control that flex. Bicepflex.ma contains the same arm model we rigged earlier, with the addition of a biceps cluster as well as a custom attribute called Bicepflex added to HandControl. To weight the biceps cluster, follow these steps: 1. With the biceps cluster selected, Shift+select the shoulder joint, and parent the cluster to the shoulder. 2. RM the arm mesh, and choose Inputs → Complete List. 3. MM drag the Cluster node down below the Skin Cluster node to ensure that the cluster node is evaluated first. 4. Set a key for the cluster at frame 1. Move the cluster outward from the shoulder joint to however high you want the biceps to flex at frame 10. (See the top image in Figure 1.29.) This will not look right because the entire cluster will move uniformly. We will have to weight the cluster points so this deformation looks correct. 5. Painting the cluster weight so that the biceps looks like a biceps when the cluster is translated involves selecting the arm mesh and choosing Deform → Paint Cluster Weights Tool. 6. Go to frame 10, flood the cluster with an influence of 0 (see the middle image in Figure 1.29), and set a key. Using Replace as the painting mode with a very high value and very low opacity, slowly build the shape of the cluster by adding influence until it looks right (see the bottom image in Figure 1.29). Smooth as much as needed. I scrub
back and forth to see how the biceps will look when the cluster flexes. 7. Move the time slider to 1, and delete the keys on the cluster. This is important because I have found that trying to create set driven keys with keyframes in place can mess up the SDK itself. 8. Connect the cluster's Translate Z attribute to the HandControl's Bicepflex attribute using set driven key to get a good flex. Do not be afraid to experiment with applying SDK to the scale values on the cluster to fine-tune it even more. 9. When your setup is complete, and if you are working on polygons, choose Polygons → Smooth to create a smooth skin for your character. You can connect the Divisions attribute to a custom attribute elsewhere on your character to interactively smooth your mesh at render time. If you do this before the end of the process, specifically before you create the lattices for your wrist, your mesh will tear apart. You can take this process further: you can create folds of the wrist as the hand rotates in Y and Z using clusters and SDK. You can also connect the bump depth of a bump map that brings up veins in the arm to a custom attribute called "vascularity" or some such using SDK. You can rig the deltoid and triceps muscles, not to mention flexing the individual muscles of the forearm. As you can see, the potential is endless; unfortunately, your time is not. You can extrapolate the techniques I've covered and apply them to the rest of the body to create a fully rigged character. The biceps-weighting technique works for eyelids as well as facial features. By painting weights, applying lattices, and manipulating sets, you can correct almost any deformation problem on the body.
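Two of the connections in steps 8 and 9 can be made directly from MEL. The names used here (bicepsClusterHandle, Character, polySmoothFace1) are placeholders for whatever your scene calls those nodes; this is a sketch, not the file's actual rig.

// Step 8: drive the cluster's Z translation from the custom Bicepflex attribute.
setDrivenKeyframe -currentDriver HandControl.Bicepflex bicepsClusterHandle.translateZ;
// Step 9: expose the smooth node's Divisions on a control for interactive smoothing.
addAttr -longName "smoothLevel" -attributeType long -min 0 -max 3 -defaultValue 0 -keyable true Character;
connectAttr -force Character.smoothLevel polySmoothFace1.divisions;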
Control Placement
Figure 1.29: Top image: Translate Cluster along Z. Middle image: flood/replace Cluster weight with influence of 0. Bottom image: paint cluster Weight back until the biceps deformation looks correct.
Although efficiency with the number of controls you create is a virtue, placing them for quick manipulation is essential. Minimize extraneous clicking of the mouse or movement of sliders, which slows the animation process. A well-thought-out set of character controls will maximize the number of vital movements that can be manipulated on each control. Open
Figure 1.30: The outliner shot
Blobbyman.ma from the CD to see an example of what I mean. Open the Outliner and click the Translate node. Expand the Translate node completely by Shift+clicking the plus sign next to the name to see the entire control structure laid out for ease of use. (See Figure 1.30.)
Blobbyman Comes to Life B l o b b y m a n .ma contains an unbound human figure accompanied by a skeleton and a partially rigged control structure. You can use this file to apply some of the techniques I've discussed and finish the rig. All IK on both the arms and feet are finished. I have also made forward kinematic controls for the lumbar and spinal bones by adding custom attributes to the HipControl. Let's look at some of the individual controls and what they do. Translate This is the "uber" group by which you can move the entire animation to a different point. For example, if you animate Blobby walking in a particular direction, by rotating Translate 90 degrees, you can shift the entire direction of the walk. Basically, if you want to "pick up and carry off" the entire animation, you use Translate. Character Character node is the top of the "keying" hierarchy. Select Character, select the hierarchy under it, and press s to set a key for every control under it. Using the arrow keys, you can quickly navigate to Character for fast key setting. To add any custom SDK controls (say to adjust a polysmoothface node, for example) add them to the character node.
CenterofGravity Often shortened to COG, CenterofGravity is the primary movement control for the entire body. Although you can use COG to move the body in XYZ, leave the rotations to the following controls. HipControl This control is for rotating the hips, lumbar, and spinal bones only. Resist the urge to unhide and unlock the movement attributes because it is best to separate moving from rotating where the body is concerned. Manipulate the spinal column by clicking the various custom attribute names and MM dragging to rotate the LumbarLocator and SpineLocators, which drive the lumbar and spinal joints. These locators are placed in the hierarchy to establish a parental hierarchy with the Shoulder and Head controllers. ShoulderControl Controlling the rotation of the shoulders, ShoulderControl is parented to Spinelocator so that it rotates with the bone it controls yet remains part of the controller, rather than the joint, hierarchy. It has four custom controls that allow the shoulder to shrug and rotate forward and backward independently. HeadandNeckControl Through an orient constraint, this control rotates the neck joint but contains custom attributes to rotate the head itself. It is entirely possible to control this rotation by connecting the rotation of the head bone to the rotation directions of this control using set driven key, but that would disallow counter rotation of the head and neck. I use this control to hang facial expressions for blendshapes, for posing a facial rig using SDK connected to custom attributes, or for controlling clusters for eye blinks. R & LhandControl These controllers move and rotate the hand at the wrist. As mentioned earlier, the controller rotates the wrist joint in Y and Z while controlling the ulna joint in X through a simple expression. This box also contains custom attributes that bend and rotate the fingers. I have completed the creation of finger controls on the left hand with SDK; it is up to you to do the right! R & LankleControl These controls move and rotate the entire lower leg. Point the knees by moving the L & RkneePointer boxes. Lift the foot by raising the ToeControls. The very bottom of the box is flush with the bottom of the foot to make placement on the floor easy to see. I am a proponent of the pose-to-pose method of animating; I find that animating by creating poses first and then adjusting timing allows me to leverage my drawing ability into effective animation. This particular structure of controls allows me to navigate throughout the structure easily using the arrow keys. Also, by navigating to the Character node, invoking the select -hi command (which I have made into a shelf button), and pressing the s key, I can key every structure in the hierarchy at once. Animate this character by setting a key on the Character hierarchy and then moving CenterofGravity. Next rotate the hips, spine, shoulders, and head. Then work on posing the feet and hands. By working from the hips outward, you minimize the amount of useless clicking and/or corrective animation you will need.
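The "key everything" workflow described above reduces to a few short MEL lines, which is essentially what the shelf button does (the node name follows the Blobbyman file).

select -hierarchy Character;   // Character plus every control below it
setKeyframe;                   // same as pressing the s key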
Summary Animation does not happen by accident; it takes a huge amount of work, but the job can be made easier with proper planning. Unfortunately, being able to plan effectively requires some knowledge of what Maya can do, and in this chapter I have given you some information about Maya's capabilities along with some ideas of when and how to use them. Having a clear idea of what you want to say with the animation of your character is perhaps the single most important tool in your animation toolbox. By always keeping in mind what you need your setup to do when you get to the character animation stage, you will not only save time when creating and setting up your character, you will animate more efficiently when the time comes, giving you more time to actually animate, rather than fight technical hurdles.
Modeling a SubDivision Surface Horse Peter Lee
In this chapter, we'll build a horse using SubDivision surfaces. First, we'll create the basic shapes in NURBS. Next, we'll convert the shapes to polygons and then to SubDivision surfaces. As we proceed, you'll see that almost all the modeling work is actually carried out with NURBS and polygons, with only the finishing touches done in SubDivision surfaces. Many of the most important aspects of creating clean SubDivision surfaces are actually a straightforward application of simple polygon- and patch-modeling techniques.
Creating Basic Shapes with NURBS Let's start with a reference picture. Create a simple polygon plane made of a single face, make it face the X axis (the side view), and then bring in the image of the horse shown earlier in this chapter, which is ch2horse_ref.tif on the CD-ROM, as a texture file. You could use an image plane for this purpose as well, but I usually prefer to use the poly plane for several reasons. It's easier to adjust the ratio of the poly plane to match the picture, scale it up or down, and place it in the world space. You can also adjust the transparency of the poly plane or its brightness. Because the ch2horse_ref.tif picture is 1200x1000, let's adjust the ratio of the poly plane to 1.2:1 to map the picture of the horse to it without any distortion. Now let's shape the head and the body. Choose Create → NURBS Primitives → Cylinder. Assign a Lambert material, and make it about one-half to two-thirds transparent. Turn it sideways, and set its attributes to 8 sections and 6 spans to begin with, as shown in Figure 2.1. We want to start with the simplest shape to make it easier for us to model the general shape of the horse; the fewer CVs to work with, the better. Select the hulls of the cylinder, and roughly shape the horse's body by translating, rotating, and scaling the hulls in the side view. (Hint: press the 3 key to view the NURBS surface in high detail.) When working from the side view, don't worry about the shape of the horse from any other views. Concentrate on getting close to the horse reference picture from the side. As
Figure 2.1: The cylinder is placed with the horse reference picture.
Figure 2.2: Extra isoparms are inserted for the horse's body and head areas.
the shaping process becomes more detailed, insert more isoparms in the cylinder by pick-masking isoparm, dragging the mouse to the place where you want to place the new isoparm, and then choosing Edit NURBS → Insert Isoparms. Figure 2.2 shows the intermediate shapes. Create two more cylinders, set the attributes to 8 sections and 5 spans, and shape the legs in the same way that you created the body. Again, don't worry about how the image looks from any view other than the side view. The legs need only to roughly conform to the horse reference picture for now, as shown in Figure 2.3. Once the horse shape roughly conforms to the reference picture, shape the horse in other views as well. Move the legs to the left side of the body. Scale the hulls of the cylinder body to make the horse's head and neck areas generally circular but vertically elongated and to make the horse's body area more circular, as shown in Figure 2.4. When the rough shaping is done, choose Edit NURBS → Detach Surfaces to cut the body in half, as shown in the image on the right in Figure 2.4. This will reduce the task of modeling the details to only one side of the horse. After we convert the NURBS to poly mesh, we'll join the legs to the body. To properly merge the legs to the body, the eight sections of the leg need to line up with the appropriate sections of the body. Place additional isoparms in the body, as well as in the legs, and shape them to more closely follow the reference picture, as shown in Figure 2.5. (Keep in mind that the surfaces should be a bit larger than the reference picture; they will shrink in size when they are converted to SubDivision surfaces.) The selected surface patches shown on the right in Figure 2.5 will become a hole after the poly conversion takes place, and the eight sections of the leg will merge with the two sections of each of the four sides. Notice that at this point, the half body in the picture has UV spans of 16 and 6, the front leg has UV spans of 11 and 8, and the back leg has UV spans of 10 and 8. Building the hooves is simple. Create a cylinder with two spans, and shape it like the image on the left in Figure 2.6. Select the bottom edge isoparm, and duplicate a curve from it. Apply Modify → Center Pivot to the curve and scale it to zero. (The curve shown on the right in Figure 2.6 has been scaled to 0.2 for visual reference only.) Loft from the edge isoparm to the curve to create the bottom part of the hoof. (You can select an isoparm using pick-masking while you're in object mode.)
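Here is the hoof-bottom trick as a MEL sketch. The cylinder name and the choice of the v[0] isoparm for the bottom edge are assumptions; use whichever isoparm actually forms the bottom edge of your hoof cylinder.

// Duplicate the bottom edge isoparm as a curve, center its pivot, collapse it to a point...
string $edgeCurve[] = `duplicateCurve -constructionHistory 1 hoofCylinder.v[0]`;
xform -centerPivots $edgeCurve[0];
scale 0 0 0 $edgeCurve[0];
// ...and loft between the isoparm and the collapsed curve to cap the hoof.
loft -constructionHistory 1 -degree 3 -sectionSpans 1 hoofCylinder.v[0] $edgeCurve[0];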
Figure 2.3: The horse's legs are roughly shaped.
Figure 2.4: The horse is shaped from front and top views, and then cut into half.
Figure 2.5: More isoparms are added to the horse.
Figure 2.6: Model the hoof using a simple cylinder, and use loft to create the bottom part of the hoof.
Conversion to Polygons
To convert the NURBS body to poly mesh, follow these steps:
1. Select the NURBS body.
2. Choose Modify → Convert → NURBS to Polygons to open the conversion options.
3. Set the Type to Quads, and set Tessellation Method to General. Set U Type to Per Span # of Iso Params, set Number U to 1, set V Type to Per Span # of Iso Params, and set Number V to 1.
4. Click Tessellate to convert the NURBS body to poly mesh, as shown in Figure 2.7.
5. Now repeat steps 1 through 4 for the legs. The hooves can be left as NURBS.
To join the different parts of the horse that have been converted to polygons, follow these steps:
Figure 2.7: The horse is converted from NURBS to polygons.
1. Select all the faces of the horse's body area where the legs will join the body, four for the front leg and four more for the back leg, and delete them.
2. Select the two legs and the body, and then choose Polygons → Combine to turn these objects into one poly mesh object.
3. Check the direction of the normals of the new object to make sure they are all pointing outward, as shown on the left in Figure 2.8. (Choose Display → Polygon Components → Normals.) If some normals are pointing in, reverse them by choosing Edit Polygons → Normals → Reverse with the Reverse and Propagate setting selected.
4. To merge the edges of the legs and the body, choose Edit Polygons → Merge Edge Tool, as shown in the top right image in Figure 2.8. When the merging is finished, the whole body should look like the lower right image in Figure 2.8.
Figure 2.8: The horse's body parts are combined, with all the normals pointing outward, and the border edges of the body and the legs are merged.
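For reference, the combine part of this process (step 2) reduces to a couple of MEL commands; the object names here are placeholders for whatever your converted pieces are called:

select -r bodyPoly frontLegPoly backLegPoly;   // the converted body and leg meshes
polyUnite -n "horse_poly";                     // Polygons > Combine
delete -ch horse_poly;                         // optional: delete construction history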
Modeling Details
Now that we have a rough poly mesh from which to start our detailed modeling, the saying that the devil's in the details applies. On the one hand, the amount of work you put into the details will determine the quality of the resulting horse. You can cheat using bump and/or displacement maps, but that will get you only so far. Most of the horse's details still have to come from its actual surface. On the other hand, the more details you put into the horse, the heavier it becomes, and heavy models are more difficult to set up and animate, and they take longer to render. It's important, therefore, to maintain a fine sense of balance as you work—add details to the model, but only as many as necessary for the job for which you are building the model.
The Mouth Area
Let's get back to detailing our horse. To build the mouth area of the horse, follow these steps:
1. Choose Edit Polygons → Split Polygon Tool to draw another line of edges around the mouth area as shown in the upper left image in Figure 2.9. You can compare the head in the upper left image with the horse shown in Figure 2.7 to see the extra line more clearly.
2. Append faces to the edge of the mouth area by choosing Polygons → Append to Polygon Tool, and move the vertices to refine the shape of the horse's head, as shown in the upper right image in Figure 2.9. I'll soon have more to say about splitting polygons properly for SubDivision surfaces, but for now, let's concentrate on building the mouth area of the horse.
3. Draw two lines from along the edge of the mouth area to intersect with the first line you drew, as shown in the image on the lower left in Figure 2.9, and then delete the faces to create the opening for the mouth, as shown in the image on the lower right in Figure 2.9.
Figure 2.9: Extra lines are drawn and faces created to refine the horse's mouth area.
Figure 2.10: The border edges along the mouth area are extruded and scaled in.
Figure 2.11: Edge lines are drawn from the mouth area up to the head.
4. Select the edges around the deleted faces as shown in the left image in Figure 2.10, choose Edit Polygons → Extrude Edge, and scale the extruded edges in. You might want to use the default manipulator handle that appears with the command, or you might want to switch to the regular Scale tool to do this, as shown in the middle image in Figure 2.10.
5. Choose Edit Polygons → Extrude Edge one more time, and then move the edges and/or scale them to create thickness for the mouth opening area, as shown in the image on the right in Figure 2.10. Tweak the points to close the mouth a bit more.
6. Using the Split Polygon tool, draw edge lines from the mouth to the head as shown in the image on the left in Figure 2.11. If you try to convert the poly horse to SubDivision surfaces at this point, you will end up with a couple of "extraordinary" vertices as shown in the image on the right in Figure 2.11, which we do not want. Extraordinary points are calculation intensive and will slow down the modeling, animation, and rendering processes. Getting rid of these extraordinary points involves making all the faces quadrangular, or four-sided, which we will get to eventually. You shouldn't clean up the non-quad faces until you have finished adding all the details on the model.
7. Draw another edge line around the bottom of the mouth to shape it better, and draw two more around the top of the mouth as shown in the image on the left in Figure 2.12.
8. Move the points and draw more edges to create an area where the nostril will be created, as shown in the middle image in Figure 2.12.
Figure 2.12: More edge lines are added around the mouth area to further refine it, and lines are also added to create a nostril area.
Figure 2.13: When the faces are kept triangular, the resulting SubDivision surface has extraordinary points, but when the faces are made into quadrangular faces, they convert to clean SubDivision surfaces.
9. Draw more edges to create an inner circular shape, and draw more edges around the mouth area to make the mouth edges protrude, as shown in the image on the right in Figure 2.12. Notice the progression of the nostril area, from a quadrangular face to a five-sided face to a six-sided face, which is then broken into smaller quadrangular and triangular faces. Also notice the way the extra faces around the mouth edge are kept as quadrangular faces.
If you convert the horse at this point, the triangular faces of the nostril area turn into a SubDivision surface with extraordinary points as shown in Figure 2.13, going from the image on the top left to the image on the top right. But when you select every other edge of the triangular faces and delete them as shown in the image on the lower left in Figure 2.13, thus turning six triangular faces into three quadrangular faces, the conversion to a SubDivision surface becomes cleaner, without any extraordinary points being created, as shown in the image on the lower right in Figure 2.13.
10. Select the center vertex of the nostril area and push it up and in. We can come back to the mouth area later for finer sculpting, especially if you want to build teeth into the mouth, but the work of adding edges is basically done.
The Eye Area
Let's move on to the eye area. We started with the image in the upper left of Figure 2.14, and we drew edge lines to arrive at what's shown in the image in the upper right of Figure 2.14. Now, to build the eye area of the horse, follow these steps:
1. Complete the edge lines along the bottom of the eye area as shown in the image on the lower left in Figure 2.14, and then sculpt the head more to the likeness of the horse.
Figure 2.14: More edge lines are drawn to create the eye area.
Figure 2.15: Edge lines are created inside the quadrangular face and the innermost face deleted to shape and refine the eye area.
One technique you can use to keep the patches quadrangular as you refine the horse is to push out a vertex point from the edge line it's part of, as shown inside the circle in the image on the lower left in Figure 2.14. Once the vertex juts out from the line, it is easy to see that you can draw another line to split the large seven-sided face into two quadrangular faces, as shown in the image on the lower right in Figure 2.14. You can see another technique being used to keep the faces quadrangular inside the circle in the image on the lower right of Figure 2.14. What was a single edge in the image on the lower left has been replaced by two edges, thus creating an extra face. One of the quadrangular faces changes into a five-sided face as a result, but we'll correct this as we model on. The necessity for creating the extra quadrangular face becomes clear when you see that this is where the hole for the eye will be created.
2. Zoom into the quadrangular face where the eye will be.
3. From the area shown in the upper left image in Figure 2.15, select the face covering the eye area. You can either delete the face, select the resulting boundary edges, and choose Edit Polygons → Extrude Edge twice, or you can use the Split Polygon tool to draw the additional edges and delete the face at the center, to create the surface shown in the upper right image in Figure 2.15.
4. The four sides of the eye area are too few to properly sculpt the eye, so draw an edge line starting from the upper middle of the eye hole, and draw another edge line starting from the lower middle. Don't worry too much about where these lines end up for now. Your only concern at this point is that the four sides have become six sides, as shown in the lower left image in Figure 2.15.
5. Draw yet another edge line going out to the side of the eye, and apply another Extrude Edge to the boundary edges of the eye so that you can create thickness for the boundary area, as shown in the lower right image in Figure 2.15.
Now, let's turn to the way the edges coming out of the eye area are connecting to the rest of the head and clean up the topology. The area stretching around the eye has some triangular faces, shown in the upper left image in Figure 2.16, which create extraordinary points when converted to a SubDivision surface, as shown in the upper right image in Figure 2.16. When the topology of the area is changed to quadrangular faces as shown in the lower left
Figure 2.16: Refining the faces to quadrangular faces will create a much cleaner SubDivision surface.
Figure 2.17: More details are added to the sides of the eye area to fine-tune the eye shape.
image in Figure 2.16, however, the resulting SubDivision surface is much cleaner, as shown in the lower right image in Figure 2.16. The subtle changes in the lines make a lot of difference in the SubDivision surface conversion. As you sculpt the eye area more, you'll soon find it necessary to further divide the side patches of the eye into smaller faces so that you can make the sides of the eye fold tightly. Notice how the smaller divisions go from the upper left image to the upper right image in Figure 2.17. A careful examination will show that all the new edges form quadrangular patches. This area, therefore, will convert to a SubDivision surface without any extraordinary points. The eye area that you've seen so far is actually a flattened-out version of the model. I did this so that you could more clearly see the topology of the patches. In the lower left image in Figure 2.17, I sculpted the same surface with the same topology more realistically. Although the points have been moved around, making some parts of the area difficult to see, the topology of the images in the lower left and the lower right are the same. The image on the lower right shows the final resulting SubDivision surface area of the eye.
The Ear Area
Detailing the ear area is similar to detailing the eye area. Starting from the basic head as shown in the upper left image in Figure 2.18, move vertices and draw edge lines to make smaller divisions. The lines added in the upper right image in Figure 2.18 are not randomly drawn, but carefully placed to prepare a five-sided face (shown inside the circle), which will be extruded to create the ear and also to make sure that we eventually end up with all quadrangular faces. Once you delete the unnecessary edges, you get the topology shown in the lower left image in Figure 2.18. Notice that, with the exception of one triangular face, all the faces are four-sided. The next step is to extrude the five-sided face as shown in the lower right image in Figure 2.18. Let's work first on the back of the ear. Follow these steps: 1. Push the extruded face out more, as shown in the upper left image in Figure 2.19. 2. Draw a line from the triangular face to the top of the horse's head as shown in the upper right image in Figure 2.19, and then draw another line to the top of the head from the back of the ear.
Figure 2.18: Edge lines are drawn to create a five-sided face, which is then extruded to create the ear.
Figure 2.19: The back of the ear area is refined.
3. Divide the faces at the back of the ear into triangular faces as shown in the lower left image in Figure 2.19. Notice that a diamond-shaped face has also been extruded out, which will become the inside of the ear. 4. Delete the three existing edges that shape the triangular faces at the back of the ear area to create new four-sided faces as shown in the lower right image in Figure 2.19. Notice that the original triangular face by the back of the ear is now also quadrangular. As for the front part of the ear, start with a flat five-sided face as shown in image A in Figure 2.20, but extrude a diamond-shaped quadrangular face as shown in image B. Push back the fifth edge to the back of the ear to create an arch for the inner backside of the ear as shown in image C. Detailing the inner parts of the ear is straightforward. Follow these steps: 1. Select the inner diamond-shaped face, and extrude it inward twice as shown in image D in Figure 2.20. 2. With the topology created, push the inner parts in and down, as well as the bottom points of the ear, as shown in image E in Figure 2.20. 3. Convert the poly surface to a clean SubDivision surface as shown in image F, and tweak the points to refine the shape of the ear. Convert back to a poly surface for further modeling.
The Body and the Front Leg
In detailing the body, we'll start by adding lines to the neck area. From the rough body shown in the upper left image in Figure 2.21, draw an extra line through the neck and the chest, and draw another line to shape the front part of the neck. Reconfiguring the patches around the neck to keep everything quadrangular is not difficult, but as you can see in the upper right image in Figure 2.21, the chest area gets a triangular face. Let's first, however, further sculpt the chest and the body areas. Add another line to the chest as shown in the lower left image in Figure 2.21, and add yet another one around where the leg joins the body as shown in the lower right image in Figure 2.21. These additions and other refinements are necessary for the proper deformation of the horse during animation. Draw another line along the upper side of the body and also along the bottom side of the body.
Figure 2.20: The front part of the ear is refined with extrusion, and one of the vertices is pushed to the back of the ear.
Further sculpt the points to make the leg look more muscular and to bulge the stomach area. These additional lines are shown from two different views in the top two images in Figure 2.22. At any point of the detailing, you can try converting the poly horse to a SubDivision surface to confirm that all the patches are quadrangular simply by choosing Modify → Convert → Polygons to Subdiv. The parts of the horse shown in the lower left image in Figure 2.22 should convert cleanly to a SubDivision surface with no extraordinary points, as shown in the lower right image.
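That quick quadrangular check can also be run from the Script Editor. The snippet below is only a sketch: it assumes the working mesh is named horse_poly (a placeholder, not a name from the book's scene files) and uses the default Polygons to Subdiv options, so a very dense mesh may require raising the maximum face count in the option box:

select -r horse_poly;
polyToSubdiv;   // Modify > Convert > Polygons to Subdiv
// Inspect the result for extraordinary points, then undo (Edit > Undo) to discard
// the test conversion and keep refining the polygon cage.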
Figure 2.21: Edge lines are added around the neck and the chest areas, and those areas are further sculpted.
The Behind and the Back Leg
The last areas to detail are the back leg and the horse's behind. The basic poly shape was converted from the NURBS patches and merged as shown in the upper left image in Figure 2.23. The two lines added while doing the front part of the body terminate as shown in the upper right image in Figure 2.23, creating two five-sided faces in the process. Again, we'll first put in all the lines necessary for proper sculpting and deformation of the horse, and then we'll clean up the topology.
Figure 2.22: Any remaining triangular faces around the chest area are edited to become quadrangular faces, and conversion to SubDivision surface shows if everything is indeed quadrangular around the chest area.
Figure 2.23: Extra edge lines are added around the back leg area, and that area is further refined.
Insert a line from the top of the body to the stomach area as shown in the lower left image in Figure 2.23. Sculpt to more clearly define the back leg and the stomach, and add another line from the top of the body all the way down the back leg as shown in the lower right image in Figure 2.23. From the back view of the horse as shown in image A in Figure 2.24, draw another line on the back leg as shown in image B. Draw a line straight down the horse's behind until it joins the first edge of the back leg as shown in image C. The lines going down to the back center should also be horizontal. Delete the extra vertices at the center as shown in image D. Draw two more horizontal lines as shown in image E. Our last detailing task in terms of sculpting is to tighten the back leg, making it look leaner and more muscular as shown in the bottom image in Figure 2.25. I moved and tightened the edges in such a way that it may appear as though I added extra lines, but compare the back and side views in the top and bottom images, and notice that no new lines have been added. You can now start on the final refinements of the patches. Figure 2.26 shows three examples of the triangular or five-sided faces being redrawn into quadrangular faces. Once you get used to "seeing" how the lines should be redrawn in any situation, splitting faces and deleting edges to clean up is not a difficult task. To confirm that you have turned the parts of the horse you are detailing into quadrangular faces, convert the horse into a SubDivision surface and see if you get extraordinary points, which are easily seen by the extra edges that surround them. Compare the images in Figure 2.27, and notice that extra edges surround the extraordinary points in the image on the left. When you are satisfied that the surface is clean, get into the front view window, snap all the vertices along the middle edge to the Y axis using the Snap to Grid function, and apply Polygons → Mirror Geometry to the horse, making sure that the Mirror Direction is set to -X in the option box. Some of the open edges such as the eyes, the mouth, and the bottom parts of the legs might merge as well when they shouldn't.
Figure 2.24: More lines are drawn around the horse's behind, and that area is further refined.
Figure 2.25: The back leg is refined to make it look more muscular.
Figure 2.26: Triangular or five-sided faces are redrawn to become quadrangular ones.
Simply delete those unnecessary faces. All the center edges of the horse should merge, but if some do not, you can merge those edges manually using the Edit Polygons → Merge Edge Tool. On rare occasions when merging does not work (usually because of opposite normal vectors), you may find it most efficient to simply delete one of the two faces and create a new one using Polygons → Append to Polygon Tool. Last, fine-tune the converted SubDivision surface horse in Standard Mode or Polygon Mode to get the final shape you want. Figure 2.28 shows the final horse model as a wireframe, and Figure 2.29 shows the final horse model as a rendered image. You can also see an animation of the textured and rigged horse by opening the ch2horsewalk.mov file on the CD-ROM.
Figure 2.27: The extraordinary points are shown in the image on the left.
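If a few seam edges still refuse to merge, a distance-based vertex merge is a workable substitute for merging them one at a time with the Merge Edge Tool; this is an alternative technique, not the author's workflow, and the tolerance value is only a starting point:

// with the stubborn center-seam vertices selected:
polyMergeVertex -d 0.001;   // merges selected vertex pairs closer than the given distance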
Figure 2.28: The wireframe of the final horse model is shown in shaded mode.
Figure 2.29: The horse is textured, lit, and rendered.
Summary
This chapter took you step by step through the process of creating a SubDivision surface horse. Many modelers in the industry are using this process, in which the basic shapes are built with NURBS patches, converted to polygon patches, merged, refined, and then converted to a SubDivision surface at the very end. Using this process, we bypass the tricky work of keeping tangency between NURBS patches to make them appear seamless. As you refine the polygon model, always edit the edges in such a way as to create four-sided faces so that there will be no extraordinary points when the model becomes a SubDivision surface. As far as I know, there is no exact technique or method for refining or redefining the polygon model's topology to create a clean final SubDivision surface. Nevertheless, experienced modelers recognize a cleanly built SubDivision surface when they see one: it has no extraordinary points, the edges form flowing lines that do not get cut off randomly, and those lines go around the surface areas in such a way as to produce correct deformation when properly weighted. But the best method for an aspiring modeler, in my opinion, is surely the one you acquire as you yourself cut the edges.
Organix—Modeling by Procedural Means
Mark Jennings Smith
When you read this chapter title, you might have thought that the spell checker malfunctioned or that I simply forced it to learn a new word, "organix." In truth, the spell checker doesn't like the word, since it's not part of the standard lexicon. "Organix" is a term I coined to describe an artistic style that combines simple shapes, simple math, and some deep philosophy but produces devilishly complex-looking forms. This chapter is far too constricted to properly cover the deep philosophical relationships of art, biology, physics, and good old Mother Nature, so I'll avoid as much of that as possible. For further edification, you can go to www.absynthesis.com. Instead we'll look at some simple techniques and some interesting examples that will more or less question the twisted cliche, "less is more." To put my work in a bit of historical perspective, I was profoundly moved by the work of William Latham, a British computer scientist who created amazing organic animation with his own proprietary software. "The Conquest of Form," Latham's first animation, was released about 1989. The forms in this animation are wonderfully textured and diverse. His hypnotic creations squirm, writhe, and mutate. These forms were "self-evolving" creatures constructed from algorithms Latham wrote based on certain base principles of natural selection and Darwinism—in other words, life and how it evolves. I am vastly oversimplifying a deep digital concept for the sake of brevity. Beyond the math, Latham's technique was
devious in its simplicity, and in homage I call it systematic multireplication and include it in a wider discipline I began to call organix.
Stop and Smell the Roses: A Primer
As CG artists we are asked to be mathematicians, theologians, chemists, anatomists, lawyers, advertisers, radicals, cartoonists, politicians, actors, philosophers, physicists, doctors, illusionists, dreamers, and creators. I can't think of a vocation that exposes one to such a range of disciplines, and admittedly I have learned things that I never thought I would. It's important to stop and smell the roses. No, I mean it! Really, both figuratively and literally. I do it all the time. And when you smell that rose, take a good look at its design:
• A full bloom is made up of concentric layers of petals emanating from a single bulbous base.
• The aged blossom unfolds, revealing its inner sanctum. Center stage is the pistil, which includes the stigma and style, surrounded by a ring of stamen.
• The bud is nothing more than the neatly compacted version of an expanded flower blossom.
• The smooth green stem is alternately peppered with sharp woodlike thorns and dark green leaves, with micro-fine serrated edges with thin stems.
You could continue, perhaps microscopically, and see that Mother Nature is still concerned about her organic symmetrical design elements. In other words, nature is the inventor of systematic multireplication—everything from a spiral staircase to a pack of birth control pills. We always have been and always will be consciously and unconsciously influenced by the symmetry of nature. Figures 3.1 and 3.2 show some organic bits and pieces from my backyard that I have microscopically enlarged from 10X to 200X. Notice that even at this tiny level nature is systematic in its design choices.
Figure 3.1: Photos A, B, and C are various angles and magnifications of an animal bone. Photos D and E are various magnifications of a tree seed. Photo F is a mature grass species.
The other thing you notice about natural objects is the duplication that takes place when nature creates. Replicating the simpler form leads to the more complex and interesting shape. The computer is a perfect workhorse for the geometric math involved in replicating forms. Every time you create a bicycle tire and spokes or a ceiling fan, you borrow from nature. Nature is not always exclusively organic. You can also find artistic inspiration in gravity, thermodynamics, and surface tension. As a reminder, I keep a chunk of wax near the rest of my computer adornments. I removed the wax from a dish that had been filled with coins. In the center of the dish was a large wine-colored candle, which eventually overflowed its seeping hot melted wax onto the coins. Retrieving my coins uncovered an amazing and inspiring form, shown in Figure 3.3. Gravity, thermodynamics, and surface tension were among the contributors to this bit of natural art. Figure 3.4 shows an image modeled in Xfrog and rendered in Maya. It was the direct result of finding the wax. This image is an example of how all experiences, great and seemingly
Figure 3.2: More organic bits and pieces microscopically enlarged
Figure 3.3: An image of coins imprinted in melted wax
Figure 3.4: The resulting image derived and inspired from the wax
insignificant, whether you are conscious of them or not, play a role in your creative strength. For so many good reasons, go outside and smell a rose.
I thought I was a true geek the first time I sat in a chair and started to notice the specular highlights of the chrome tubing, and the diffuse shadows the fabric played over the wood. That was a long time ago, when it was all still so new. And now I notice that a slight reflection, a glint off of an edge of glass, can instantly put me in that mode where I think about sitting with sliders trying to recreate those visual instances. (Mark Sylvester, Ambassador 3D)
A Visual Maya Zoo
I'm often asked how I create certain images. My moving images puzzle people. Noncomputer types get the three-word, short answer, "3D virtual spirograph." 3D people get the one-minute concept demo in Maya. The reaction is similar to that of a magician's fan who becomes privy to an easy trick they had thought difficult. The results of my little "trick," however, can be quite complicated and amazing. Let's begin modeling our first organix primitive. Follow these steps:
1. Load the Maya binary file carapace.mb from the CD, and then open Maya's Outliner (choose Window → Outliner).
2. Select carapace from the Outliner, as shown in Figure 3.5, and give it a quick render.
Figure 3.5: A rendering of the single carapace object hides the more complex results derived from its use.
The carapace object is a simple NURBS cone that has been distorted a bit. I put a simple ramp texture on it with earthy tones. I used versions of this same ramp for a bump map and incandescence maps. Which textures you use is an aesthetic judgment call, but contrasting patterns work great. I often liken this situation to a kaleidoscope: depending on the mixture of elements you place inside and the random spin, the results can be surprisingly dramatic. Whether you place corn kernels, broken glass, small screws, pebbles, or vitamins inside, the single units seem to lose identity in the whole pattern.
3. With carapace selected, group it to itself. Choose Edit → Group, highlight Parent and Center, and then click Apply. Name the group Unit_Segment, as shown in the graphic on the left, and then select Unit_Segment.
Now let's duplicate the group a bit and change our single organix primitive into a more complex shape.
4. Choose Edit → Duplicate to open the Duplicate Options dialog box. Set Translate to 0, 0.3, 0.3; set Rotate to 10, 0, 10; set Scale to 0.9, 0.9, 0.9; set Number of Copies to 39; and set Geometry Type to Instance, as shown in Figure 3.6. Set Group Under to Parent. That's it. Click Apply and check your result. You should have something that looks like Figure 3.7. Move your camera around a bit and check out the shape.
5. After you check your result, render it out, and then save your scene.
Figure 3.8 shows several rendered angles of the new object, which is organic looking indeed. The simple shader that was placed on our base primitive object takes on a whole new texture life after it has been assigned individually on the newly replicated objects. The repetitive nature of the texture is reminiscent of thousands of insects and reptiles. The cone point takes on new significance as well because it appears to be part of a series of thorn, claw, or spikelike barbs. Notice too that we used instances instead of copies, which saves rendering time and memory. Using instances also gives us our backbone for entry-level animation, with which we will deal later.
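If you would rather script steps 3 and 4 than use the option boxes, the MEL below is a rough equivalent. It assumes an object named carapace, and it approximates the Duplicate options by instancing the latest segment and offsetting it on each pass so the transforms accumulate:

select -r carapace;
group -n "Unit_Segment";        // Edit > Group
xform -cp "Unit_Segment";       // center the new group's pivot
select -r "Unit_Segment";
for ($i = 1; $i <= 39; $i++) {
    string $new[] = `instance`; // instance the current selection
    select -r $new[0];
    move -r 0 0.3 0.3;          // the same offsets used in the Duplicate Options dialog box
    rotate -r 10 0 10;
    scale -r 0.9 0.9 0.9;
}

Instancing the group shares the geometry beneath it, which is close to (though not structurally identical to) what the option box produces, so treat this as a sketch and compare your result against Figure 3.7.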
Figure 3.6: The Duplicate Options dialog box
Figure 3.7: A hardware-rendered result of what your result should look like with the Outliner settings
Figure 3.8: A collage of four separate angles of the new primitive showing its diversity and ability to blend separate components into one
By simply altering the pivot point, a single sphere can have wildly different duplication results when the duplication is performed with the exact same parameters. The three sets of images shown in Figures 3.9 through 3.17 show the sphere before and after duplication. The first image of each series displays the highlighted relative position of the pivot point prior to duplication.
Bring Simple Life to a Creation
Now let's look at how to apply some basic yet interesting animation to our organism. The motion we will create is deviously simple, yet the end results are sometimes quite hypnotizing and fluid. Follow these steps to continue the example in the previous section:
1. Load carapace2.mb from the CD. Open the Outliner to display 40 groups (Segment_Unit). Collapse the original group (Segment_Unit1). You will find the original carapace object we started with in the previous section. We grouped the original carapace to itself before duplicating those groups. We will work with that original object for now. Selecting carapace under Segment_Unit1 selects all objects that make up our organism.
Figure 3.9: The first image
Figure 3.10: Wireframe version of the render of Figure 3.9 shown at render angle.
Figure 3.11: Final rendered version of Figure 3.9.
Figure 3.12: The second image
Figure 3.13: Wireframe version of the render of Figure 3.12 shown at render angle.
Figure 3.14: Final rendered version of Figure 3.12.
Figure 3.15: The third image
Figure 3.16: Wireframe version of the render of Figure 3.15 shown at render angle.
Figure 3.17: Final rendered version of Figure 3.15.
2. This object is set up for some interesting movement. Zoom your perspective view out so that you can get a decent view of the entire organism. 3. Now, let's translate the original carapace node. Test it first by a single translation in one axis. Notice how the entire shape squirms in accompaniment. After each single translation, be sure to undo again by pressing z (Undo) to return to the original state. Try two and then three translations before returning to the original state. Each translation is sequentially
additive and significant to the next translation. Although the final destination of each translation is the same, the shape of the whole organism can differ dramatically depending on the translations.
4. You might have guessed that you can alter the base node by other means. Scaling and rotating the node, for example, will give you interesting results. With careless abandon, translate, rotate, and scale in XYZ. Don't fret about returning to your original state; the file is on the CD safe and sound. It is actually quite amazing how much fun it is to watch the shape change form. Believe it or not, you will eventually get a sense of the particular form and be able to judge how it will react. Remember as well that this form is easy to create yet yields some complex results. You can create more complex organisms that expand wildly on the eye-candy factor. Expanding further, each Segment_Unit## has a child node of its own. Selecting these carapace nodes and manipulating them will develop different animations entirely.
5. Select several of the other carapace nodes under some of the other numbered Segment_Units, and witness the behavior of the object as you tweak several nodes in combination.
6. Create a quick keyframed animation to get an idea of the interesting motion you can achieve. Keep it simple. Translate your original carapace node once or twice over 300 frames. You can also scale it and rotate it. Playblast or render out your animation. I have included a rendered animation on the CD called Squirm320QTsoren3.mov. Parts of this animation are reminiscent of color-shifting octopi and their dexterous tentacles.
Even a little keyframing produces some interesting animation. It is always best to keep your keying work rather simple until you begin to understand the complexities of a form. Each object is on its own axis but is controlled by at least a single other source at any time. When you begin to introduce too many actions, conflicting rotational data can easily result in animation that jumps from key to key rather than smoothly transforming. This looks much like the old gimbal lock problem so common in the early days of 3D animation. So remember: keep it simple.
As you recall, we instanced one original object to create our organism. Pros and cons are associated with instancing, but instancing suits our purposes famously. Here are a few rules to remember about geometry copies and instances (a quick MEL illustration follows below):
• Copies are just that—identical copies of the geometry. Instances rely on the data that composes the original geometry. An instance is a displayed representation of that original geometry.
• Using instances takes much less memory, renders faster, and reduces the size of a scene file.
• You cannot alter instances directly. Any change in geometry placed on the original is reflected immediately in the instances.
• Instances cannot be assigned alternate shaders. Changes to the shading network of the original object are reflected in the instances as well.
• You can duplicate and instance lights, but instanced lights will have no effect in the scene (so I'm not really sure why you would want to do that).
The animation Squirm320QTsoren3.mov on the CD-ROM expresses a great range of organix movement. We derived something quite complex from simplicity. As you begin to add other elements to the animation equation, you will certainly deal with more variety. Let's take animation a few degrees further by adding some tricks. On paper it will not look like much, but the results will show otherwise.
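Here is the quick MEL illustration mentioned above: a throwaway scene that shows the copy/instance difference. The object names are invented for the example:

polySphere -n "origBall";
duplicate -n "copyBall" origBall;   // an independent copy of the geometry
instance -n "instBall" origBall;    // shares origBall's shape node
move -r 3 0 0 copyBall;
move -r -3 0 0 instBall;
// Deform the original: the instance follows the change, the copy does not.
select -r origBall.vtx[0];
move -r 0 0.5 0;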
Abusing the Cone
A cone is a basic geometric primitive in any 3D package. Make that geometry a NURBS cone, and you have the makings of some wild animation. This is another simple example of organix that shows that less can be more, depending on how you look at it. To see just how much torture a NURBS cone can endure, follow these steps:
1. Load cone1.mb from the CD. Figure 3.18 shows the parameters for the cone at this point. Nothing differs from the default NURBS cone other than that we want to set the Number of Spans to 5 and cap the bottom of the cone. Now let's elongate the cone.
2. Scale the cone to 0.8, 2.0, 0.8. Rename it to instanced_thorn_control, select instanced_thorn_control, and group it to itself. Call the parent single_thorn_control.
Now that the cone is a bit longer and thinner, let's move its pivot point. Actually, the default position of the pivot point for the instanced_thorn_control object (cone) is fine; we are primarily interested in the parent.
3. Select single_thorn_control. Again, I like to use the Outliner for this, but it's your workflow. Go with it!
Figure 3.18: The NURBS Cone Options dialog box
Figure 3.19: A cone with its pivot point offset from its original location within the cone now is markedly distant from its geometry.
4. Press the Insert key (the Home key on a Mac system) so that you can move the pivot from its original position. Translate the pivot over eight units in the X direction. If your cone moves, you didn't press Insert properly or at all.
5. After you establish the new pivot position for the cone, press Insert again to disable the ability to translate the pivot and return you to the normal possible translations of the group.
6. Now that the single_thorn_control pivot is offset, move the group until its pivot is at 0,0,0. Figure 3.19 shows the results.
By offsetting the pivot point of an object or group, the Duplicate tool becomes much more interesting, especially for working with organix forms. This offset technique is a primary tool for achieving interesting and varied results.
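The same pivot edit can be scripted rather than done with the Insert key. A minimal sketch, assuming the group is named single_thorn_control and currently sits at the origin:

// Push both pivots 8 units out in X, then slide the group back so the pivot
// ends up at the origin, well away from the cone itself.
xform -ws -rotatePivot 8 0 0 -scalePivot 8 0 0 single_thorn_control;
move -r -8 0 0 single_thorn_control;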
The Duplicate Options dialog box displays the values from its previous use. This feature can be great for helping you to remember your last set of parameters if you want to use them again. It also comes in handy for slight variations of those original parameters. If your new parameters are completely different, it's best to start from scratch (with the default settings). In the Duplicate Options dialog box, choose Edit → Reset Settings to reset the parameters to the default. I find that resetting also gives you a better mental image of what you are doing and that you are less likely to make input mistakes.
Figure 3.20: The results and the settings
Now let's run this baby around the proverbial horn by duplicating it.
7. Change to the front view if you are not there already. Select single_thorn_control to make it active, and then choose Edit → Duplicate to open the Duplicate Options dialog box.
8. Choose Edit → Reset Settings to flush out the old parameters.
9. We want to create 39 more instances of this group for a total of 40, so set Number of Copies to 39, and select Instance as the Geometry Type. The only other parameter we will change is to adjust the rotation of each instance by 9 degrees on the Z-axis. Remember that because our parent group has an offset pivot point, the result will be dramatically less crowded than our previous example.
10. Click Duplicate to see your result. Figure 3.20 shows what you should have, and the inset shows the parameters used to achieve it.
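The radial duplication in steps 7 through 10 also collapses to a tiny MEL loop. Again, this is just a sketch of the same idea, and it relies on the offset pivot set up earlier:

select -r single_thorn_control;
for ($i = 1; $i <= 39; $i++) {
    string $new[] = `instance`;   // instance the previously created group
    select -r $new[0];
    rotate -r 0 0 9;              // each instance swings a further 9 degrees about the offset pivot
}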
This particular single_thorn_control is the parent of the original instanced_thorn_control and therefore the only one not numbered. Notice that each child of the subsequent single_thorn_control## nodes (instanced_thorn_control) does not have numbers indicating its generation. If I had attached a numeral 1 to the end of the original single_thorn_control before duplicating it, Maya would have named the first instance single_thorn_control2, the second single_thorn_control3, and so on. I prefer to distinguish the original by no number at all. It makes more sense to me, but do what seems logical to you.
11. Save your scene and load cones3.mb from the CD. The file cones3.mb is the example up to this point, with the addition of shaded cones. I simply dragged a simple blinn shader with a color ramp onto the cones, and I placed a few lights in the scene. The only other difference is that the object has been scaled down to 0.33 of its original size.
12. When you open the Outliner, you will notice our group hierarchy mass_control → single_thorn_control## → instanced_thorn_control##. We are primarily interested in single_thorn_control → instanced_thorn_control.
13. Select single_thorn_control → instanced_thorn_control, and play around with it a bit. Scale, rotate, and translate the instanced_thorn_control. Again you'll see an amazing amount of interesting instanced control over all the cones. The replicated cone points will often look like barbs, thorns, teeth, or claws, and the layered cone bases will often resemble scales. If you get hopelessly lost, load the Maya scene file again. Figure 3.21 shows some of my variations. Again you see the power of the instanced object and using this technique to create some interesting images quickly and with little fuss. As I mentioned, you can add other things to the mix.
14. In the Outliner or Hypergraph, select the 39 numbered instances of single_thorn_control##. In the Channel box, find the Visibility parameter. We want to turn off the visibility of those 39 instances temporarily, so change Visibility to Off, or enter 0 in the parameter box. All 39 cones should disappear, leaving the original unnumbered single_thorn_control, as shown in Figure 3.22. This gives us room to focus on our original cone without clutter. The reason for creating the original cone with so many spans is so that we can really "abuse the cone." The more rambunctious we get with our node transformations, the more we test the patience of the NURBS cone to behave smoothly. That said, let's create some clusters on our cone.
15. Select single_thorn_control, and press F8 on your keyboard to toggle into component selection mode. Now you can select CVs to group into clusters.
16. Select the point CVs of the cone; we will make that our first cluster. Under Animation, choose Deform → Create Cluster. Maya creates a cluster out of this top group of CVs, names it cluster1Handle, and places a C in the interface representing it. You'll see this in any orthogonal or camera view. Rename cluster1Handle to thorn_point.
Figure 3.21: Variations on a theme
We want to create a cluster for every planar level of CVs on the cone, using the method just outlined for the point. Out of the bottom two levels we will create a single cluster and call it thorn_base. Figure 3.23 shows the clusters and their naming conventions from the point to the base. There is not a specific end goal here. What I'm presenting is more a theory-based concept. These techniques are guidelines for creating the abstract, which can serve a useful purpose for attaining a certain look or effect. Beyond the concepts, there is no right or wrong here, and experimentation and imagination are the key to creating something interesting. You'll never see a tutorial for kaleidoscope operation stating "Shake your kaleidoscope this way to create this pattern."
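If you want to build the clusters from the Script Editor instead of the menus, the pattern is the same for every CV row. Select a row of CVs first (interactively or by component name), then:

string $c[] = `cluster`;      // Deform > Create Cluster; returns the cluster node and its handle
rename $c[1] "thorn_point";   // rename the handle; repeat for thorn2 ... thorn_base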
Figure 3.22: The original unnumbered single_thorn_control
If you got lost, cones3b.mb on the CD will get you to this point in the process. The file cones3c.mb is the same as cones3b.mb except that the cone is already animated in cones3c.mb. Animating a cluster is fairly routine. Since we are not doing a walk cycle or something that requires a significant set of ordered rules, we don't need to spend a great deal of time animating the cone. With eight sections and five spans, the cone is certainly pliable. Let's animate a single cluster.
17. Load cones3b.mb from the CD. Go to the front view; we will animate from this view.
Figure 3.23: The clusters and their naming conventions
18. Open the Outliner, and slide the seven clusters into view. We are going to create the most basic cyclical animation possible (one that repeats exactly after a certain number of frames).
19. We want to create a 90-frame animation, so set the Time Slider for 90 frames. Select the cluster thorn5handle. Put the Time Slider first on frame 0, and create a key.
20. In the Channel box, choose Channels → Key All. Now slide your Time Slider to frame 89, and once again choose Channels → Key All in the Channel box. Slide down to frame 30, and move cluster thorn5handle 0.4 units in the X direction and create a key.
21. At frame 60, move thorn5handle to -0.4 in X and choose Key All again. You have now created a short cluster animation. Set the Time Slider to run from frame 1 to 89 (not 0 to 89) and play it back. This is not the most exciting animation in the world, but it will prove quite useful in adding secondary animation. Remember that by adding this animation to the original cone, all the cones will mimic this motion. You buy yourself a lot of syncopated razzle dazzle with little effort. (A MEL version of this little cycle appears after these steps.)
22. Stop the animation, and open the Outliner. We now want to turn the visibility back on for those 39 instances that we made disappear. Select the 39 numbered instances of single_thorn_control##.
23. In the Channel box, find the Visibility parameter. Change Visibility to On or enter 1 in the parameter box. All 39 cones should reappear, showing the full complement of 40 cones again. Now if you scrub through your 90 frames, you will see that all the cones share the cluster's animation. Make changes to the original node, and run the animation again. You have now added another level of complexity to your organism. Additionally, just as with our first organix primitive, each instanced_thorn_control## child of the single_thorn_control## parents is tweakable. Since each one is an offset of the next, they will control the whole organism slightly differently. Close this scene and open cones3c.mb. This scene is identical to the previous scene except that I spent a little more time on the animation of the cone. I actually took advantage of all the clusters that we had previously made.
24. Set the Time Slider to 1000 frames, and then play the animation. You can see the resulting animation created by using the extra clusters. Again, as we have done previously, make the rest of the organism visible. Without adjusting nodes you can see that the animation already has a cool impact on the form. Once you begin tweaking the nodes, the results will be more prominent. Included on the CD is a 1000-frame animation that incorporates the scaling, rotation, and translation of instanced nodes. It also incorporates the secondary cluster animation and some camera and light movement, which, as you can see, adds yet another level of complexity for experimentation.
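For reference, the 90-frame cycle from steps 19 through 21 looks like this in MEL. It assumes the handle starts at rest at 0, and thorn5handle is the handle name used in the scene file; keying the handle with setKeyframe keys all of its keyable channels, like Key All:

playbackOptions -min 1 -max 89;
currentTime 0;   setKeyframe thorn5handle;
currentTime 30;  setAttr thorn5handle.translateX 0.4;   setKeyframe thorn5handle;
currentTime 60;  setAttr thorn5handle.translateX -0.4;  setKeyframe thorn5handle;
currentTime 89;  setAttr thorn5handle.translateX 0;     setKeyframe thorn5handle;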
Alternate Uses of Paint Effects
While I can safely say that I have used almost every software package known (OK, maybe not every one), I am always looking for something unique or exemplary. I take a non-platform/non-package approach to getting a piece of animation or art as I envision it. To that end, I don't discount any software on any platform if it is going to provide me with a solution. The all too
prevalent attitude of operating system snobbery gets us nowhere as artists. Even if you make a heavy investment of both time and money in a certain software package, don't let that blind you to a nifty toolset if it is within your reach. Later in this chapter, I'll describe some software I use in concert with Maya. When Paint Effects was introduced into Maya, it was an astounding advancement and was used for everything from flowing grass to perspiration. The sheer volume of animatable parameters was enough to boggle the mind. I soon got bored painting trees and decided to see how it could be used otherwise. While testing the limitations of Paint Effects, I was disheartened to find that you couldn't paint polygons. I model in other programs for various reasons, and that makes importing NURBS into Maya a problem. My trouble arose when I tried to paint hair on a polyface model. But it didn't take me long to figure out that I could parent a Maya NURBS proxy to my object, fashion it similarly in form, make it paintable, and then turn it invisible. This became the cornerstone of cool things to come. The following example shows a model of my sister-in-law's face. The polygon model was fitted with a "NURBS skull" proxy for the purpose of receiving Paint Effects strokes. As in real life, I made sure that the hair covers her unsightly scars (uh, I mean polygon edges) (see Figure 3.24). You can work along by opening scary_face.mb from the CD.
It is easy enough to introduce a NURBS sphere into the scene that would serve as Jennifer's replacement skull. The skull does not need to actually look like a skull, but it is important to make sure that the paintable NURBS surface will indeed fit within the edges of your
Figure 3.24: The imported 3D model of my sister-in-law Jennifer poses a problem for Paint Effects hair attachment. Maya does not yet allow for strokes to be attached to polygons in the normal fashion.
polygons. In this example, hair, which protrudes from the scalp anyway, will suffer little from this cheat. Hair can hide scars, tattoos, hickeys, and, in this case, an actual cranium! Over the years, I've found it best to create an obnoxious color for my proxy, because this will make it stand out amidst the hair, pointing out heavy clumps of hair while immediately identifying bald spots. A single Paint Effects stroke can cross multiple surfaces. This is a cool feature and is open for serious experimentation. I noticed while painting different hairstyles on various-shaped head models that a proxy skull wavered dramatically in shape from character to character. This posed a challenge depending on the hairstyle you are trying to achieve. I found it easier to place multiple paintable surfaces together as a great foundation for laying hair. A stroke can be continued across several NURBS surfaces, making it easier to judge how a Paint Effects curve will react. A single curve can then be tweaked further by nudging whole NURBS surfaces around (usually spheres) for the right look. Try it. It will give new meaning to CG hairstyling. Let's try painting a few locks of hair on a proxy skull. Notice that in the process we are painting across multiple surfaces. First, open the file scary_face.mb on the CD. In this file, I added supplemental geometry to the side of the model's head. I was having difficulty clearing the side of her cheekbones properly. By adding an elongated NURBS sphere (see Figure 3.25), I could easily start from the top of her head and sweep around her cheek without making the hair protrude oddly through her lovely high cheekbones. When you are reasonably happy with your skull adjustments, it is time to test the head for probable adjustments. Select all the NURBS surfaces you added as your proxy, and then choose Paint Effects → Make Paintable. Make any brush choice from your Visor and add a few strokes. Remember that holding down the b key allows you to scale the size and flow of your brush stroke. Whatever stroke you use, be it corn or red Mohawk, you'll see it follow your proxy skull. Your image should look something like Figure 3.26.
Figure 3.25: A side wireframe view of two distorted NURBS spheres that will stand in for already removed polygon geometry. Because Paint Effects strokes will not adhere to polygons, I used NURBS replacements.
Figure 3.26: Two brightly colored NURBS spheres are made paintable for accepting Paint Effects brush strokes. Here a variety of vegetation crosses the skullcap and cascades down the side of the head. Notice that a brush stroke will continue across two separate paintable surfaces.
Figure 3.27: The top-down view of the head model with real Paint Effects hair brush strokes. The NURBS proxies used to replace the skull are still in place but have been toggled invisible in the Channel Editor.
Now make all your proxies active at the same time. In the Channel Editor, locate Visibility and turn it off. Now all your proxy skulls should have disappeared. Try adding a few more strokes to your "invisible" objects. The strokes will continue to cover the surface regardless of the status of their visibility. As I mentioned, the obnoxious color shaders applied to your proxies serve a purpose. Make your proxies visible again. You'll see how much easier it is to paint hair when you know where you're painting. Try painting some decent locks on the scary face model. See if you can complete the entire head, making it as real as you can. It's actually quite amusing. Figures 3.27 and 3.28 show one possible outcome to our model's hair replacement surgery. How does this fit into organix? Well, the realization that I could paint across more than one object and turn those object(s) invisible prompted me to consider further uses—or abuses—of the technique. The ability of a single Paint Effects stroke to cross multiple surfaces made me curious. It added a whole new realm of possibility to my abstract beasties, while creating a new way to toy with Paint Effects strokes.
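When you have more than a couple of proxy surfaces, flipping their Visibility one at a time gets tedious; a tiny MEL loop handles them all at once (the proxy names here are invented for the example):

string $proxies[] = {"Proxy_Skull", "Proxy_Cheek_L", "Proxy_Cheek_R"};
for ($p in $proxies)
    setAttr ($p + ".visibility") 0;   // 0 hides the proxy; set it to 1 to bring it back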
Organix and Paint Effects
It's always great to have a new tool. I naturally wanted to adapt Paint Effects into an interesting and unconventional organix tool. Post haste I delved into the possibilities of some new organix forms. As I learned what Paint Effects was capable of, I began to see great untapped potential in the brush strokes that it generated.
Figure 3.28: The rendered result of a few well-placed Paint Effects strokes does a fairly decent job of giving Jennifer a new cyber-style.
Duplicating Unattached Paint Effects to Create Larger Grouped Objects
On the CD I have placed a simple Maya binary, SingleStroke.mb. This scene file is basic Maya. What's important here is the thought process and concepts. We'll consider and utilize Paint Effects brush strokes as objects unto themselves. Therefore, applying the same principles of multireplication and instancing, you can create your own organix primitives. These organix primitives, composed of grouped Paint Effects brush strokes, will become a form of pseudo geometry. They'll hold some standard 3D NURBS and polygonal geometry characteristics. Some pros and cons are associated with using these primitives. In some situations, using a Paint Effects tree over a polygonal model of a tree is advantageous; in other situations, Paint Effects is clearly not up to the task. You can quickly assemble an orchard of apple trees for backdrop scenery, but placing a bird's nest and avian interactions with the tree is best done by using actual geometry.
Figure 3.29: The Duplicate Options dialog box
Figure 3.30: The perspective window
Let's look at a few examples of duplicating Paint Effects brush strokes as geometry. Follow these steps:
1. Load the Maya binary SingleStroke.mb from the CD, and then open the Outliner (choose Window → Outliner).
2. Select strokeGold1 from the Outliner and rotate it 90 degrees on the X-axis. Now render it. The brush stroke is one of the default Paint Effects metal brushes in Maya.
3. Now let's duplicate the brush stroke a bit and change its form. Choose Edit → Duplicate to open the Duplicate Options dialog box. We will want to alter some parameters. Change Translate to 0, 0, 1.5. Change the Scale values to 1.0, 0.8, 0.7. Set the Number of Copies to 19. Geometry Type is Instance. Set Group to Parent. That's it! Make sure strokeGold1 is still active. Click Apply and check your result. Figure 3.29 shows the correct parameters for the Duplicate Options dialog box.
4. Check your result in the perspective window, as shown in Figure 3.30, and then render it out. Your results should look something like Figure 3.31, depending on your camera angle.
5. In the Outliner, select all strokeGold# brush strokes and group them together. Label this new group StrokeGoldGroup1 and close it. The curve that you should not select is curveGold. Rename that to StrokeGoldCurve1Control. Save your scene file. It is StrokeGoldGroup1Control.mb on the CD.
Figure 3.31: The results of duplicating Paint Effects
Let's take this a tiny bit further. Follow these steps:
1. Load StrokeGoldGroup1Control.mb from the CD.
2. If necessary, open the Outliner. Select the StrokeGoldGroup1 group only. What we have now is a Paint Effects primitive fresh from the oven and ready for further experimentation. Let's scale it down to a smaller size by changing the Scale XYZ values in the Channel box to 0.2, 0.2, 0.2.
3. Rename StrokeGoldGroup1 to StrokeGoldGroup. As I mentioned earlier in the chapter, I prefer to designate the original with no numbers. Open StrokeGoldGroup to view the child brush strokes that constitute it. Select brush strokes strokeGold1 through strokeGold12, and group them together (select them all and press Ctrl+g). Name that group StrokePack. Performing this step gives us control over the group as a whole.
4. Now let's duplicate our organix primitive. With StrokeGoldGroup still selected, choose Edit → Duplicate to open the Duplicate Options dialog box, and reset the parameters to their default values by choosing Edit → Reset Settings. Set the Translate values to 0.5, 0.5, 0.5 in X, Y, and Z, respectively. Set the Rotate values to 20.0, 20.0, 20.0 in XYZ. Set Number of Copies to 19, and set the Geometry Type to Instance. Figure 3.32 shows the proper parameters. Click Apply and frame your result in the perspective window so that you can see the entire chain of duplicates; now you can see why we scaled our model back down a bit. Figure 3.33 should resemble your output. Render it out to see your result (see Figure 3.34). (A MEL sketch of this grouping and duplication pass follows these steps.)
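The grouping and the second, instanced duplication can be scripted in the same spirit. Again, this is only a sketch, not the exact structure of the scene on the CD: the names mirror the steps above, and the wildcard select assumes your strokes are all named strokeGold#.

    // Gather the strokes into StrokePack, wrap that in StrokeGoldGroup,
    // shrink the primitive, then chain 19 instances of the whole group.
    select -r "strokeGold*";
    group -name "StrokePack";
    group -name "StrokeGoldGroup";
    scale 0.2 0.2 0.2;
    int $i;
    for ($i = 1; $i <= 19; $i++)
    {
        string $copy[] = `instance`;
        select -r $copy[0];
        move -r 0.5 0.5 0.5;
        rotate -r 20 20 20;
    }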
Figure 3.32: Setting the parameters for the duplication of StrokeGoldGroup1
Figure 3.33: The wireframe perspective view of the duplication result
This image may not be drop-dead gorgeous, but we have created an organix model with several individual controls that operate its identifiable parts both en masse and separately. The lone curve, StrokeGoldCurve1Control, simultaneously controls the scaling, translation, and rotation of every instanced brush stroke created from it. Selecting any of the StrokeGoldGroup groups controls that individual group of curves as a single unit. Each StrokePack group also controls its brush strokes as a single unit, but because the strokes are instanced, scaling, rotating, or translating one StrokePack affects all the StrokePacks simultaneously. Load the completed scene file StrokeGoldControlComplete.mb from the CD and toy around with the controls a bit. I used a relatively simple Paint Effects brush stroke, but as with geometry, you can see the potential for some elaborate displays.

Figure 3.34: The rendered version of the perspective view with new duplicated results
The Chinese Dragon Revisited
In the popular Sybex Mastering series, I created an image for Mastering Maya Complete 2. I received a great deal of positive response about the image, but there was no tutorial or other mention of how it was created. This was partially because Paint Effects was not yet a product at the time, and thus the image was created with as-yet-unreleased tools. While creating Chinese Dragon (see Figure 3.35), I applied organix principles to Paint Effects brush strokes for the first time. I used some knowledge gained from my proxy skull experiments to create the image. Let's dissect the original Maya binary and uncover a few other interesting techniques. Follow these steps:
1. Load Chinese_Dragon_RevisitedA.mb from the CD. I have hidden a lot of clutter from view so that we can focus on part of the dragon (see Figure 3.36). The NURBS sphere, Proxy_Sphere, was placed in the scene to accept Paint Effects strokes. Instead of hair, I
Figure 3.35: The Chinese Dragon from Mastering Maya Complete 2. The dragon was rendered with an alpha channel and composited with the sky background, painted in Photoshop.
Figure 3.36: The result of loading Chinese_Dragon_RevisitedA.mb with most geometry and brush strokes invisible
chose one of the metal brushes. I painted a single brush stroke onto Proxy_Sphere, which I had previously made paintable. The intention here was to multireplicate the curve many times over and then make Proxy_Sphere invisible. Here are some things to note:
• A Paint Effects brush stroke applies to a NURBS surface much like a curve on surface, but it is not attached exclusively to that surface. In other words, the brush stroke can be removed from the surface of the NURBS sphere as an independent node, yet it retains its original painted shape.
• Altering the shape, translation, or rotation of Proxy_Sphere will affect its assigned brush strokes in kind, regardless of the proximity of each to the other. See Figure 3.37.
• Paint Effects brush strokes can translate, rotate, and scale independently of the object on which they were painted. See Figure 3.38.
2. Load Chinese_Dragon_RevisitedB.mb from the CD.
3. In the Outliner, open the Tendrils group. The Paint Effects brush stroke stroke1 is the sole child of the Tendrils group; this is the stroke painted on Proxy_Sphere. Perform actions on both stroke1 and Proxy_Sphere, and notice how they affect each other. The earlier bulleted items may seem trivial, but, as you can see from this example, they are at the core of organix animation using Paint Effects.
Figure 3.37: Various simultaneous actions on assigned brush strokes by altering the paintable object
4. Reload Chinese_Dragon_RevisitedB.mb, and run through the Time Slider with the stroke1 brush stroke selected. You'll see that it has been previously animated with a rotation around its X-axis. Because it shares the same origin as Proxy_Sphere, it appears to act as though it were an animated curve on a surface. This illusion breaks down quickly, however.
5. Reload Chinese_Dragon_RevisitedB.mb, and select Proxy_Sphere in the Outliner. Scale it in the Y-axis until your sphere becomes an egg. The brush stroke will scale with the sphere as expected. Create a keyframe at zero for the new Y scale value of
Figure 3.38: Various actions of a Paint Effects brush stroke performed independently from the NURBS object on which it was painted P r o x y _ S p h e r e , and run through the Time Slider again. You'll see now that the curve on surface animation was just an illusion. 6. Reload C h i n e s e _ D r a g o n _ R e v i s i t e d A . m b . (Make sure you load C h i n e s e _ D r a g o n _ R e v i s i t e d A . m b , not C h i n e s e _ D r a g o n _ R e v i s i t e d B . m b . ) OpentheTendrils group, and you will see the rest of the duplicated brush strokes. There are a total of 60 instances of s t r o k e 1 . By selecting various numbered brush strokes, you can easily see that all instances do not lie on the surface of P r o x y _ S p h e r e . Performing any actions directly on any single brush stroke will not affect the others as you might expect. Try it for yourself. If you lose the original configuration, simply reload C h i n e s e _ D r a g o n _ R e v i s i t e d A . m b .
Great! Now we can begin to see something more interesting by toying with Proxy_Sphere.
7. Load Chinese_Dragon_RevisitedC.mb.
8. In the Outliner, select Proxy_Sphere and go to the camera1 perspective view if you are not already there.
9. Press w on the keyboard to select Translate mode, and select Proxy_Sphere at its origin handle. Moving the sphere around in 3D gives you a good idea why I called my scene Chinese Dragon.
The original brush stroke, while animatable, will not transfer its actions to its 59 instanced strokes as you might suspect. Scrolling through the Time Slider makes it obvious that the instances are reacting to the animation on stroke1. However, I created this animation before creating the instances, by animating the curve created by the initial brush stroke. That curve has since been deleted, but it would be required to perform instanced stroke animation.
10. To make Proxy_Sphere invisible, turn off Visibility in the Proxy_Sphere Channel box by entering off or 0 in the visibility channel. Turning off visibility does not, however, stop you from animating Proxy_Sphere to control the 60 Paint Effects brush strokes. Selecting Proxy_Sphere in the Outliner or Hypergraph and selecting an action displays the proper handle, and you can set keyframes on the invisible object just fine! If you are using this invisible control technique to animate a Paint Effects organix primitive, it makes sense to animate with the object invisible.
11. Load Chinese_Dragon_Final.mb from the CD, and open the Outliner once more. This is the final version of the binary, updated to Maya 4. The animation of the dragon is also rendered as a QuickTime movie on the CD and is named chinese_dragon_revisited320Qtsoren3.mov. The rest of the scene is composed of two separate organix chains, created with instanced NURBS geometry. The original geometry on each chain is nothing more than a distorted, 3D-textured NURBS sphere, animated by the same techniques for instanced geometry described at the beginning of this chapter. I also added a second instanced Paint Effects primitive to Proxy_Sphere and animated it in exactly the same fashion. The final animation was the direct result of experimentation, with no particular goal in mind other than interesting organix-type motion.
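Step 10 has a one-line MEL equivalent, and the invisible sphere can be keyframed from the Script Editor just as easily. The rotation keys below are only an illustration; they are not the animation in the scene on the CD.

    // Hide the control sphere, then keyframe it anyway; the instanced
    // strokes keep following the invisible object.
    setAttr "Proxy_Sphere.visibility" 0;
    setKeyframe -time 1   -attribute "rotateY" -value 0   "Proxy_Sphere";
    setKeyframe -time 120 -attribute "rotateY" -value 180 "Proxy_Sphere";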
Other Organic Programs and Maya
I use several programs in conjunction with Maya to create organix (as well as other things). One that stands out as a sleeper hit and bears mentioning is Xfrog from Greenworks. Xfrog is a procedural modeling and animation program that fits into the core of the organix ideal. With a set of Xfrog "components," you can design sophisticated organic models. Using Xfrog, I quickly modeled the multi-legged creature in the image "Pinos Mellaceonus" (see Figure 3.39). Greenworks has graciously allowed us to put a full demo of Xfrog 3.5 on the CD. It includes tutorial information and some great models and animation files. I have used Xfrog
Figure 3.39: I modeled the "Pinos Mellaceonus" insect in Xfrog. Texturing and other models were done in Maya.
to create both real and abstract models. The palette of tools is not overwhelming, but it allows you to fashion some highly detailed models, be they imagined or real. A reasonably proficient user can model anything from a pineapple to a whole banana tree (see Figure 3.40). Earlier I mentioned that you can finely craft the intimate parts of a flower to exacting detail using this program. Although you would be hard-pressed to model a horse (impossible, I think), anything natural that conforms to organix-type rules is fair game for Xfrog. A fly's eye, rows of teeth, a volvox, salmon roe, and any plant imaginable are well within its grasp. It also has some interesting animatable features that allow you to animate a finely detailed tree or plant from seed to maturity. Greenworks has written a Maya plug-in that imports Xfrog-created models and animation (.xfr files). As you go through the Greenworks tutorials on the CD, some texture problems might arise. Xfrog uses the PNG and fully supported TIF image formats. Maya once read PNG in an earlier version, but this functionality was removed for some reason in later versions, and Maya has never correctly read the full flavor spectrum of the TIF file format. These are shortcomings on Maya's part that I have been lobbying to have corrected, but at present they are a bit of an
Figure 3.40: The Xfrog interface with the highly detailed and textured model of a banana tree
intractable problem. However, the Maya plug-in does a great job of importing the models and the Xfrog animation as well. The CD includes a few Xfrog files that I have converted into Maya 4 format. Also on the CD is an original abstract animation (Xfrog_primed320QTsoren3.mov) that I created in Xfrog and then imported into Maya for texturing and rendering.

The Tip of the Food Chain
As forewarned at the beginning of the chapter, in these pages we have barely nicked the surface of organix. The simple techniques and examples in this chapter, albeit fascinating in their own right, represent a small portion of the 3D basics of organix. Years ago, pioneers were interested in biological approaches to computer graphics. My second strong CG influence was the work of Karl Sims. His groundbreaking animation "Panspermia" was the talk of SIGGRAPH in 1990. Sims wrote his own software, based on biological "self-evolving" algorithms, that created, moved, reproduced, and evolved 3D geometry (biological geometry) according to rules gleaned from nature. I hope the images in Figure 3.41 inspire you. There is an amazing cult of individuals interested in self-evolving computer graphics. If you're interested, you can head to my website at www.absynthesis.com to find out more.
Figure 3.41: A series of images from the award-winning animation "Panspermia" by Karl Sims. See http://web.genarts.com/karl/index.html for more information on Karl's work. All images © Karl Sims. All rights reserved.
Summary
As we come to the end of this chapter, I offer some final parting ideas and thoughts about things that were hinted at but not given full treatment here. The idea of passing a brush stroke across multiple surfaces led me to think about Paint Effects and Maya Dynamics. The ability to apply hard- and soft-body dynamics to Paint Effects-laden geometry adds intrigue to animation, and it stands to reason that we can affect Paint Effects brush strokes in other dynamic ways as well. Forces such as gravity hold interesting promise for experimentation, too, and I have done interesting experiments with deformers and constraints (springs, nail, and hinge). MEL is a formidable programming language for developing self-evolving Maya worlds right within the program itself. I have included on the CD some simple Maya binary files to let you delve into a few of these ideas; by dissecting these binaries, you might spark your own ideas and move in new directions. These files are located in the chapter directory under Bonus_Binaries. Whatever you do, experiment with abandon, and have fun!
Chapter Four:
Animation and Motion Capture—Working Together
Chapter Five:
Lip-Synching Real-World Projects
Animation and Motion Capture—Working Together
Robin Akin and Remington Scott
Although motion capture has often been seen in a competitive role with traditional keyframed animation, this chapter deals with using keyframe animation and motion capture together to create complex animation more efficiently than either method could on its own. In general, motion capture data can be used as a base onto which keyframed animation is built. The complex motion of human (or humanlike) characters can be blocked out very quickly by a live actor in a mocap session. Onto this base layer of animation, you can add keyframed animation in a nondestructive manner, which allows you to add personality to the character, move certain body parts, and otherwise adjust the motion capture animation to fit the needs of a specific scene. We'll cover methods of combining these two forms of animation, which have been used successfully in a number of films, and we'll include some exercises that put the theory into practice.
What Is Motion Capture?
Motion capture, commonly referred to as mocap, is the process of digitally capturing the motion of something in the real world and translating the result into data that can be applied later in animation software. Many animation houses and visual effects studios use motion capture as an animation tool in a variety of ways. You'll find motion capture in films, commercials, and video games and in just about any area of computer graphics that requires realistic human motion. More often than not, you'll see human movement mocapped, but you can digitally capture even the motion of inanimate objects such as cameras. You can also mocap props to help with positioning in a scene.
© 2002 FFFP
Sensors capture the data and then plot the data's individual coordinates in space. The most popular systems use optical or magnetic sensors. Optical systems involve special cameras that surround the capture area, facing the subject. Small reflective spheres are placed at strategic points on the body, and the coordinates of each ball in 3D space are captured and translated into data that can be read into an animation program. Magnetic systems rely on cables to transmit the coordinate data. Another term you'll hear in relation to mocap is performance animation. Performance animation typically refers to mocap that focuses on human motion, for example, the specific style of a famous person's movements. Often, the point of choosing performance animation as a technique is to capture the nuances of the movements specific to a personality. Andre Agassi and Michael Jackson, for example, are two celebrities who have been mocapped to record their distinct styles of movement: Agassi's tennis strokes and Jackson's dancing.
On the Technical Side In an optical mocap system, at least two cameras must see each marker in order to triangulate the position of that marker in a calibrated volume or zone. Generally, the more cameras in an optical system, the more (additional, repetitive) coverage of each marker, lf a marker is obscured from the view of the cameras, a gap appears in the marker data until the marker can be seen from at least two cameras again. Gaps in the data are not easy to deal with, because this missing information is a portion of your subject's action. Filling these motion gaps has been tedious in the past, but improvements in camera resolution, image quality, and software are solving these technical issues.
Animation and Mocap
As we mentioned earlier, mocap is a tool you can use to get the desired animation result. If we could get the word "tool" to blink on a printed page, we would. Motion capture, like computer animation in general, is simply a means to an end. In our experience, at times mocap data needs to be "massaged" to get the intended result, but at other times this tool has been a huge time-saver when the action matched exactly what was planned for the animation. In those cases, mocap buys time for more challenging shots, such as those that might need to be keyframed using traditional methods. The decision to use mocap does not always mean replacing traditional keyframing. Often, you can combine the two methods or use mocap only for shots where it's useful or appropriate. That was the established approach in productions such as Titanic and Final Fantasy: The Spirits Within. As they say, "necessity is the mother of invention," and over the years, necessity has brought about the development of different types of hybrid motion capture and keyframe animation processes and pipelines. We have found that the best-case scenario is a setup that allows motion capture and traditional keyframing to be combined as the animator sees fit. Using this approach, you can override the mocap when needed, and you have more control over when to use each method. In the end, you are the judge of whether the result is what was intended. It's best to have as much flexibility as possible.
When Should You Use Mocap?
Mocap generally shines when used to animate human or anthropomorphic, bipedal characters. In some circumstances, it has been used to realistically re-create the motions of animals. Motion capture is not always a good choice when it comes to humans enacting the motion of other creatures. For example, you may recall the "guy in a suit" syndrome of Godzilla. At one point, mocap was chosen to animate Godzilla, but in the end it was reconsidered, and animators keyframed the creature. This was primarily because the character's skeletal structure and the structure of a human simply weren't compatible enough to get believable results at that time. The same is true of characters that squash and stretch, such as cartoony characters: in its present state, mocap can't handle squash and stretch well. They're just two different, and "too different," styles. Used for some time for background characters, crowds, and stunts, mocap is fast becoming a desirable method for filmmakers to create realistic digital performers. At Square USA, we used mocap to create lifelike human performances for the animated characters in Final Fantasy: The Spirits Within. What if a movie has wrapped, but a scene needs to be changed and the actor is no longer available? A digital version of that actor may be the solution. Or what if an actor must interact with a digital element, such as a humanoid alien creature, that needs to move realistically? Consider using motion capture. If an actor is placed in a dangerous situation, or must perform an action that they cannot do without harming themselves, mocap can be a viable alternative. For example, in a scene from the feature film Final Fantasy: The Spirits Within, the lead male character Gray jumps through a glass window, falls more than 25 feet onto solid ground, tumble-rolls forward, and gets up. A jump from three stories onto a hard surface could seriously injure anyone, even a stunt man. To complicate matters, we didn't want to use a stunt man, because the motion of a stunt man would not match that of the actor playing Gray. We solved this by dividing the motion into three segments. In the first segment, the actor breaks through the glass and jumps from a platform about 4 feet high; in the second segment, Gray's fall is keyframed; in the third segment, the actor again jumps from the platform, only this time focusing on the landing and his reaction to the fall. We then blended these three segments to form one continuous action. Here's a short list of some ways that mocap has been used to create realistic motion in feature films:
• Gladiator: crowds in the Roman Colosseum
• Titanic: passengers on deck and stunt characters
• The Patriot: battle scenes
• The Mummy: crowds and mummy characters
• The Mummy Returns: principal actor SFX shots, pygmy mummies, and Anubis creatures
• Star Wars: The Phantom Menace: robots and humanoid creatures
• Lord of the Rings trilogy: battle crowds, Gollum, principal actor SFX shots, and distant Orc shots
• Enemy at the Gates: CG soldiers and crowd scenes
• Pearl Harbor: digital sailors
• Final Fantasy: The Spirits Within: human character animation
A New Medium of Expression?
Think of this sidebar as an acknowledgment of the animator's alter ego, the mocap performer. Despite the popular beliefs of many in the entertainment industry, realistic digital actors will not replace human actors. When you use motion capture, an actor performs the action that helps bring the digital character to life. At the foundation of the fusion of motion capture and animation is the motion-captured performance: an exact digital recording of an actor's motions. The essence of the performance is the emotional, psychological, and physical presence of the actor and their relationship to other performers, the environment, and the audience. Can motion capture be a new medium of expression for the next-generation actor? Advances in digital technologies and motion capture have allowed actors to expand beyond Hollywood typecasting based on looks. Actors can explore new opportunities and create memorable performances and characters that they would never have been able to play previously because of ethnicity, age, or appearance. The success of motion capture rests on the performer's talent for emoting through action rather than appearance. If the final result of your animation is to be cartoonlike and you use motion capture as a foundation, work with a very animated performer, keeping in mind that you will most likely need to interpret and enhance the performance, especially in situations that require squashing and stretching of the character. However, if you are looking for a realistic performance, steer clear of actors who cannot express themselves through subtleties of motion.
Using Motion Capture as Reference or Straight out of the Box
To animate a character believably, seasoned animators act out their shots and use reference material to observe timing and weight. They can then apply those observations when animating a character. Just as video reference and mirrors are useful to animators, mocap can be a good reference. At times, the motion capture is near perfect, and with just minor cleaning, the shot's complete. At other times, you need to change the shot significantly. You might need to solve technical problems with the data, or you might need to slightly alter the character's behavior, which ends up requiring extensive hand keyframing. Even in these cases, the mocap can be useful as a reference for timing. The strength of motion capture is its ability to record fine details of motion. Most optical systems record markers at a sub-millimeter level, allowing for an extremely high fidelity of subtlety and nuance in your animation. Mocap has been used extensively in the past for shots that include fighting, running, stunts, and other broad-motion actions; however, with the capability to capture the slightest movement, this technology also enhances effects shots that require a heightened sense of reality. The decision about whether to use mocap straight out of the box, meaning using the captured motion without changing it, often depends on the circumstances surrounding the shot.
Using Mocap for Reference
Studios such as ILM have used motion capture for character reference, as have animators at Square USA, Digital Domain, and other studios. Digital Domain's use was most apparent in
James Cameron's film, Titanic. Titanic was a significant film, especially in the computer graphics industry, for its visual effects. Some of those effects were produced by animating human characters through the use of keyframing, motion capture, and at times a combination of mocap and keyframing. Mocap was used mainly in two ways. For normal actions, such as people walking around on the deck, it was often used directly. It was used as reference for some of the stunt work such as characters climbing on the railings.
What About Rotocapture?
Rotocapture is a term that was coined to describe an early technique for combining mocap reference and keyframe animation. Mocap was used as a reference, and the animation model was "rotoed," or posed into position over it.
Rotoscoping and Motion Capture as Reference?
The theory of using mocap as reference is similar to using live-action reference footage when animating humans and animals. This concept is covered well in the classic animation reference, The Illusion of Life, by Frank Thomas and Ollie Johnston. This book chronicles the development of Disney animation, is a valuable resource for animators, and is an excellent addition to any animator's library. The Illusion of Life has been out of print for a while, but with luck, you can probably find it for sale online and through used book stores. Chapter 13, "The Uses of Live Action in Drawing Humans and Animals," covers how to use reference material. The authors stress several concepts, including how important it is to use reference footage as a guide, but not necessarily directly as the animation. They also describe how helpful the footage can be when you are studying and perfecting certain actions. The development of Disney's Snow White was helped a great deal by animators who had the opportunity to study human motion through rotoscoping. Debate continues to this day as to whether, or just how much of, Snow White's motion was handled with this technique. The Seven Dwarfs were animated traditionally, and live-action film was used as resource material. It helped in developing character gestures and attitudes, as well as in examining the intricacies of a human being's actions. You can learn a lot about the subtleties of movement by examining live-action footage in detail. Motion capture can be equally helpful when approached with the same mindset.
Rotocapture and rotoscoping are similar techniques in that they both require an artist to animate on top of existing reference material. The main difference is that in rotoscoping the artist animates over a 2D plate or image, and in rotocapture the artist animates over 3D motion data. The limitation of rotoscoping is that you have only one perspective to use as reference, whereas with rotocapture you can move the camera virtually anywhere in 3D space. Animators have encountered two problems with rotoscoping: it can be extremely time-consuming, and it can be creatively frustrating. In 3D animation especially, rotocapture is becoming more antiquated with the advancement of motion capture technologies and animation pipelines. These improvements let you use motion capture as a basis for your animation. In essence, mocap delivers the basic elements of weight and timing with a foundation grounded in real-world physics. You can layer keys on top of motion capture data and add layers of more creative and thoughtful elements to the character's performance. You can also use the raw mocap as reference for timing when animating characters, while preserving more creative control over the process.
Recommended Links and Other Resources
You can learn more from the following sources. You'll also find a great deal of useful information about motion capture and keyframed animation on the web.
• FILMBOX, at www.kaydara.com/
• Motion Analysis, at www.motionanalysis.com/
• VICON, at www.vicon.com/
• Biovision, at www.biovision.com/
• House of Moves, at www.moves.com/
• Motion Analysis Studios, at www.performancecapture.com/
• The Illusion of Life: Disney Animation, by Frank Thomas and Ollie Johnston (ISBN 0-7868-6070-7)
• All of Eadweard Muybridge's photographic studies of figures and animals in motion
Motion Capture and Final Fantasy: The Spirits Within
Hironobu Sakaguchi, the director of Final Fantasy: The Spirits Within, believed his vision could be told only through realistic human digital animation. The story required that the emotional range of the characters be genuinely human, breaking away from animation's tradition of exaggerating character actions. Early in preproduction, we decided that motion capture would play a central role in creating realistic digital humans, but we were also dedicated to the skills of the animators and wanted to ensure that the animation department was as actively challenged as the motion capture department. The big hurdle we faced early on was deciding what to keyframe and what to capture. This question was answered in two ways. First, we decided to use mocap for human body skeletal movement, as seen in Figure 4.1; for the complex facial expressions, however, mocap could not equal the quality of the performances that the animators created.
All animation for Final Fantasy: The Spirits Within needed to be completed within just over a year and a half, and that goal was met. Production for the entire movie, however, spanned approximately four years.
Second, we decided to ensure that the animation department had the necessary tools to animate over the motion capture, in order to tweak it as a whole or in parts. Square USA developed a pipeline to ensure that the animation and mocap departments could work together. Square USA created a proprietary toolset in Maya that allowed a "hybrid" motion capture and keyframe animation process. Using the methodology of a motion-animation pipeline that supported a workflow of motion capture and animators working together, we were able to decide on a per shot level how much mocap to use straight out of the box and how much to animate over or replace completely. It wasn't always one or the other—all mocap or none. For the humans, most of the time it was a blend. For example, we might tweak some parts of the body for certain shots. Aki's hips might need more rotation, or parts of the body, such as the arms, might be keyed, and others, such as the legs, might be mocapped. Sometimes we combined keyframe animation with mocap to accommodate a change in a character's action.
Figure 4.1: Square USA utilized motion capture to create realistic human subtleties and nuances for the performances of computer-generated characters in Final Fantasy: The Spirits Within. Copyright © 2001 FF Film Partners. All rights reserved. Square Pictures, Inc.
When details are overlooked in the capture session, the result can mean more work than anticipated for the animation department. Generally, the primary reason for unnecessary changes to the motion capture data is that intensive preproduction development has been overlooked or glossed over. You must plan every aspect of your character, from what they say to whom they interact with and how they do it. Missing small details in casting, timing, set design, dialogue, or interaction will catch up with you in motion editing or post-capture animation production and affect your delivery schedule and your budget.
Cleaning Mocap
You might have heard that after a motion capture session is completed and the data is handed to an animator, it still needs cleaning. In a large production pipeline, a service bureau or motion capture technicians may have done some cleaning before an animator deals with the mocap data. Depending on your pipeline, the data may still need some attention to improve the quality of the result. What does it mean to "clean" mocap? Standards of clean vary. For our purposes, we define clean as data that preserves the key poses of the performance and does not have technical issues such as flipping joints or noisy vibrations in the curves.
Mocap data can be very clean if the conditions at the time of capture are ideal, but that's not always the case, unfortunately. In the past, an animator often ended up massaging the data one way or another to get the desired result. You need to make sure that the motion capture team you work with can provide clean data. Ask for sample data to see how much, if any, clean-up you need to do. Your data might need to be cleaned for many reasons, but the motion capture team should be experienced enough to deliver clean data to you. The quality of your data has a lot to do with the artistic and technical capabilities of the motion capture data tracker and editor.
Things to Watch For
The mocap process usually involves optical markers or magnetic sensors on the body, and these markers and sensors need to be fastened securely. Often it can be difficult to prevent the unwanted vibration of some markers. For example, you might have a sensor or a marker on a foot that still wobbles a small bit when the actor moves around. This small vibration shows in the data as noise (see Figure 4.2). Magnetic sensors may show metallic or magnetic interference in their curves. In Maya, noise shows up in curve data in the Graph Editor, usually as regular peaks and valleys; looking at mocap data in the Graph Editor is much like looking at an audio curve's frequencies. Filters help average out and eliminate noise, but you must apply them carefully, because they can over-process the data or wipe out subtle performance details.
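Most of this cleanup happens with the Graph Editor's filtering tools, but the idea behind selective smoothing is easy to prototype. The MEL sketch below runs a crude three-key moving average over one channel; the node name, attribute, and frame range are placeholders, and in production you would reach for Maya's own filters and touch only the noisy segment.

    // Average each key with its neighbors on a noisy channel
    // (placeholder node/attribute/range; adjust to your data).
    string $plug = "leftFoot.translateY";
    float $times[] = `keyframe -time "100:200" -query -timeChange $plug`;
    float $vals[]  = `keyframe -time "100:200" -query -valueChange $plug`;
    int $i;
    for ($i = 1; $i < size($vals) - 1; $i++)
    {
        float $avg = ($vals[$i - 1] + $vals[$i] + $vals[$i + 1]) / 3.0;
        keyframe -edit -absolute -time $times[$i] -valueChange $avg $plug;
    }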
Figure 4.2: In the top image, motion capture data shows noise. In the bottom image, a cleaning filter operation has been applied to the curve to minimize the noise.
Beware of over-cleaning and over-filtering. Noise reduction filtering is often a necessary part of the clean-up process. However, you can over-clean mocap to the point that it looks strange, unnatural, and "floaty" (the character seems to float along instead of having a sense of weight to its motion). When looking at the curves, determine if you need to clean the entire curve or only a segment of it. This is where selective cleaning comes in. Modifying the curve should affect only problem areas, preserving important details and poses.
Because noise is typically a high-frequency jitter, sometimes it's more obvious on a high-resolution skinned model than on a low-resolution animation proxy model. If in doubt, double-check the final animation on a skinned character before calling it finished.
Blending with Keyframe Animation
The trickiest part of blending mocap and keyframed animation is keeping a consistent look. Be sure that one shot doesn't look purely mocapped while another looks hand-keyed in a completely different style; if you compare the distinctly different styles of two animators animating the same character, you'll get the idea. You can modify mocap in several ways, and you can blend it with animation; we'll look at some examples in the next section. Figure 4.3 shows the model with some of the controls displayed in the Layer Editor. In the examples that follow, you will use the action of a glamorous, well-poised female model walking the catwalk. Each example concentrates on using motion capture and modifying it with keyframe animation. We won't get into detail about how the character is set up, because there are too many methods; that's a whole chapter in itself and is dealt with elsewhere in the
Figure 4.3: The model with some of the controls displayed in the Layer Editor
book (see Chapter 1, for example). We designed these examples to have a plug-and-play feel to them so that you can simply load the mocap and start animating on top of it.
Offsetting Existing Mocap
The motion capture that you will use for these examples is that of our model walking a straight line on a flat surface. The walk is just right for the shot, but the director wants a bump in the surface. For the first example, you'll use the mocap data and create an animation over that, featuring the model's reaction to the surface's change in height.
If you were to animate the mocap skeleton directly, you would edit the keys in the motion capture f-curves, which would permanently alter the mocap data. We highly recommend not doing that, because of its destructive nature. In this example, we'll animate an offset skeleton that is driven by the mocap data. Using an additional skeleton, you can adjust the mocap without destroying it. You can use a setup in which an identical skeleton, the offset skeleton, has its IK handles, joints, and pole vectors point-constrained to locators that are parented to the corresponding joints on the mocap skeleton. When there are no adjustments to the offset skeleton, the two skeletons are in perfect alignment and overlap. When you move the locators of the offset skeleton, the second skeleton becomes apparent. You can animate the controls to literally offset the mocap in order to get the desired result for the animation. Essentially, you animate slight changes on top of the mocap data while retaining the capture data's purity. This example illustrates how slight changes to the offset skeleton can drastically alter the action of the character without affecting the motion capture data. A practical use of the offset skeleton involves slight retargeting of the controllers. For example, you capture an actor reaching for a doorknob, turning it, and opening a door, but in the scene file the doorknob is at a different height than the original. Simply move the offset locator for that hand to the new location, and the skeleton will follow. The f-curves are not deleted, nor is the capture data destroyed.
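Building one of these constraint hookups takes only a few commands. The MEL sketch below wires a single pair; mocap_L_ankle and offset_L_ankleIK are placeholder names for one mocap joint and the matching IK handle on the offset skeleton, not the names used in the scene on the CD.

    // A locator rides on the mocap joint; the offset IK handle follows it.
    string $loc[] = `spaceLocator -name "L_ankleOffsetLoc"`;
    parent $loc[0] "mocap_L_ankle";
    setAttr ($loc[0] + ".translateX") 0;   // snap the locator onto the joint
    setAttr ($loc[0] + ".translateY") 0;
    setAttr ($loc[0] + ".translateZ") 0;
    pointConstraint $loc[0] "offset_L_ankleIK";
    // Keying the locator's translate now offsets the motion without
    // touching the mocap f-curves.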
We would like to thank Florian Fernandez for providing the animation model, setup, and MEL script on the CD that accompanies this book. You can visit his web page at www.flo3d.com. We would also like to acknowledge Spectrum Studios for providing the motion capture data.
Load the Scene File
From the CD, load the scene file Walk.mb. When you play the animation, which is driven by mocap data, you should see a low-resolution model of a woman walking. For most optical motion capture recording sessions, the action is captured at a frame rate in the range of 60 to 120 frames per second. The reason for such high-speed capture is to ensure that the markers do not blur. The technology behind tracking markers accurately
depends on the software being able to find the center of the marker. If a marker blurs, it becomes elongated, and the position of its center is less accurate, causing jitters in the data. The mocap for these examples was captured at 60 frames per second, but you'll need to change the frame rate to fit either the film (24 frames per second) or the NTSC (30 frames per second) standard for playback. In this example, we'll set the frame rate to video NTSC, which is 30 frames per second. You will also want to turn off the Auto Key feature, which can be done in the Preferences window as well. To adjust preferences, follow these steps:
1. From the Marking menu, choose Window
→ Settings/Preferences → Preferences to
2. From the Categories list, select Settings to open the Settings: General Application Preferences window. In the Working Units section, set the Time field to NTSC (30fps). Also, make sure that Linear is set to Foot. At the bottom of the window, click the Save button. 3. Turn off Auto Key in the Keys window under Settings. Clear the Auto Keys check box. The motion capture animation now ends at frame 191. Make Sure That the Control Components Are Visible To make the control components visible, follow these steps: 1. In the Scene Menu bar, LM click Show to display a drop-down menu. Make sure that the NURBS Curves check box is checked. You can't see any of the controls if the NURBS Curves check box is not checked. Scrub the Time Slider, and you will see that, along with the low-resolution model walking, there is a letter C in a cube The C is the Control box, a toggle switch that changes control from the offset skeleton to the animation skeleton. You won't use this element in this example. 2. In the Channel box, click the middle icon to display the Layer Editor.
3. Scroll down to the animControlsL, animControlsR, and animControls layers. Turn off visibility for each layer by clicking the V in the box to the left of each layer name. The result is that the animation skeleton in the scene is hidden. 4. Select the cube with the letter C in it. In the Channel box, as shown in Figure 4.4, make sure that Mocap_keyframe is set to 0. This value toggles the offset or animation skeletons to fit onto the mocap skeletons. Once the value is set to 0, you will not need to do anything more with this Control box.
Figure 4.4: The Control box
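If you prefer the Script Editor, the same preference and visibility settings can be made with a few MEL commands. This is only a sketch: the layer names and the Mocap_keyframe attribute come from the scene as described above, and ControlBox stands in for whatever the cube with the letter C is actually named in your Outliner.

    currentUnit -time "ntsc";        // 30 fps playback
    currentUnit -linear "foot";      // working units in feet
    autoKeyframe -state off;         // turn off Auto Key
    // Hide the animation-control display layers.
    setAttr "animControlsL.visibility" 0;
    setAttr "animControlsR.visibility" 0;
    setAttr "animControls.visibility" 0;
    // Placeholder name for the Control box cube; substitute your node.
    setAttr "ControlBox.Mocap_keyframe" 0;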
Notice in the workspace window the five yellow cubes on the model's body: two at her wrists, two at her ankles, and one at her root. These are the offset controllers that we will animate. All the cubes translate in X, Y, and Z, but only the root cube both translates and rotates. To make the cubes easier to select for animation, follow these steps:
1. Select the yellow cube at the root, and make sure that nothing else is selected. From the Marking menu, choose Display → Component Display → Selection Handles to display a small selection handle above the geometry in the center of the cube. The selection handle always appears on top of the geometry. When the selection handle is selected, you are controlling the root offset controller. Dragging a selection box over the root area selects only the selection handle of the root offset controller.
2. Select the yellow cubes at the wrists and ankles. Choose Display → Component Display → Selection Handles to display selection handles for all the offset controllers. This makes selecting and deselecting these nodes easier: drag a selection box around the model's body, and all the offset controllers are selected while none of the geometry or joints are. Figure 4.5 displays the selection handles for the offset controllers.

Lock All Rotation and Scale Values for the Wrists and Ankles
To lock the rotation and scale values, follow these steps:
1. Select the offset controllers for the wrists and ankles. In the Channel box, click and drag down to select the Rotate X, Rotate Y, Rotate Z, Scale X, Scale Y, and Scale Z attributes. RM click and hold to open a pop-up menu, and select Lock Selected to lock the attributes and gray them out, as shown in Figure 4.6. Because the root offset both translates and rotates, we will lock only the scale attributes for that node.
2. Lock the Scale attributes for the root offset controller.
3. Save this scene as Offset.mb. You will use this scene for the examples that follow.

Animate the Offsets
Before we start animating our model walking over the bump, let's play a little with the offset controllers to see exactly what they do. First, set a key on the five offset controllers in order to retain their original positions. Follow these steps:
1. Select all of the model's offset controllers, go to frame 0 in the Time Slider, and set a key.
Figure 4.5: Displaying selection handles for offset controllers
Pressing s on the keyboard is a shortcut for choosing Animate → Set Key.
2. Select the model's root and wrist offset controllers. Set keys for these nodes at frames 14, 35, 56, 78, 101, 122, 143, 165, and 187.
3. With the root and both wrist offset controllers selected, go to frame 25. Use the Move tool to translate the model's upper body down so that her knees are slightly bent. Set a key.
4. To copy this key to frames 45, 68, 90, 111, 133, 153, and 176, first make sure you are on frame 25 in the Time Slider. (You can press the period key to jump forward to the next key in the timeline and the comma key to jump back to the previous one.) MM click anywhere on the Time Slider and hold; MM dragging in the timeline lets you change time without updating the scene. Drag your mouse to frame 45, release, and set a key. You have copied frame 25 to frame 45. Continue to copy this key to the remaining frames. (A MEL sketch of this keying pattern appears at the end of these steps.)
Play the animation. The model looks as if she has a heavy weight on her shoulders because of the bounce we created in her walk. You can adjust the f-curve of her bounce in the Graph Editor. Follow these steps:
Figure 4.6: Lock the attributes on the wrist and ankles for Rotate X, Y, Z and Scale X, y, Z. Also lock the attributes for the root offset controller for Scale X, Y, Z.
1. Select the model's root and both wrist offset controllers. In the Marking menu, choose Window → Animation Editors → Graph Editor to open the Graph Editor.
2. Hold down the Ctrl key and select the Translate Y nodes for armOffsetL and armOffsetR. Also Ctrl+select the Translate X node for the rootOffset node. We set up this character with the X translation used for vertical motion on the rootOffset rather than the Y translation; the rootOffset node is the only place this appears.
3. In the Graph Editor, shown in Figure 4.7, you should see the f-curves representing the model's root and right and left wrist offset nodes forming a sine-like wave between zero and negative values. LM click and drag a selection box around all the keys that appear in the negative value range. The keys at the 0 value represent the model's offset controllers in the normal stance; the keys at negative values represent her offset controllers in the squat position.
4. With the negative-value keys selected, select the Move tool, and then Shift+MM click inside the Graph Editor to display an icon that consists of an arrow and a question mark. This constrains your movement of the selected keys to either a vertical or a horizontal direction within the Graph Editor. By moving the mouse cursor upward, you choose the vertical constraint, and the keys move closer to or farther from the 0 value. Shifting the negative-value keys closer to or farther from 0 makes the model's root and wrist offset nodes drive her upper body up or down, which affects the amount of bend in her knees. Adjust the keys according to your preference to make the model look as if she is carrying a heavy burden on her back (see Figure 4.8). When you're ready to move on, save this scene as HeavyWalk.mb, and then load Offset.mb.
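The keying pattern from the previous steps (a neutral key on the walk's contact frames and a repeated "squat" key between them) can also be laid down with MEL. The controller names below are the ones shown in the Graph Editor; the squat depth is a placeholder value to tune by eye, and remember that on this rig the root's vertical motion lives on Translate X.

    // Key a repeating squat on the offset controllers.
    string $ctrls[] = { "rootOffset", "armOffsetL", "armOffsetR" };
    int $upFrames[]   = { 0, 14, 35, 56, 78, 101, 122, 143, 165, 187 };
    int $downFrames[] = { 25, 45, 68, 90, 111, 133, 153, 176 };
    float $drop = -2.0;   // placeholder squat depth; adjust to your model
    for ($c in $ctrls)
    {
        string $attr = "translateY";
        if ($c == "rootOffset")
            $attr = "translateX";   // vertical motion is X on the root
        for ($f in $upFrames)
            setKeyframe -time $f -attribute $attr -value 0 $c;
        for ($f in $downFrames)
            setKeyframe -time $f -attribute $attr -value $drop $c;
    }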
Figure 4.7: In the Graph Editor, select the keys that are in the negative value range.
Figure 4.8: Translate the model's offset controller keys closer to the 0 value to lessen her upper body plunge (or move them farther from the 0 value to increase her drop).

Animating Walking over a Bump in the Terrain
To make the bump, follow these steps:
1. Choose Create → Polygon Primitives → Cube. In the Channel box, set Translate X to 7, Translate Z to -3, Scale X to 4, Scale Y to 0.5, and Scale Z to 1.5. When you now play back the animation, you will see the model's foot pass through the cube as she walks (see Figure 4.9).
Figure 4.9: Notice that at frame 50 the model's left foot sinks into the cube.
Figure 4.10: At frame 50, the model's left foot lands on the platform and does not sink through it. The ankle is offset by a value of 30 in Translate Y.

Now, let's make the bump affect the model's left foot when she places it down at about frame 50 (see Figure 4.10). Follow these steps:
1. In the Perspective window, select the left ankle offset controller. Go to frame 30 in the Time Slider and set a key. This is the first frame of animation for this offset.
2. Go to frame 36. In the Channel box, enter a value of 30 in the Translate Y field. Set a key.
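The bump and these first two ankle keys can also be created from the Script Editor. A minimal sketch, assuming the legOffsetL name that the Graph Editor shows for the left ankle offset controller:

    // Build the bump and key the left ankle offset over it.
    string $bump[] = `polyCube -name "bump"`;
    setAttr ($bump[0] + ".translateX") 7;
    setAttr ($bump[0] + ".translateZ") -3;
    setAttr ($bump[0] + ".scaleX") 4;
    setAttr ($bump[0] + ".scaleY") 0.5;
    setAttr ($bump[0] + ".scaleZ") 1.5;
    // Flat at frame 30, raised 30 units at frame 36.
    setKeyframe -time 30 -attribute "translateY" -value 0  "legOffsetL";
    setKeyframe -time 36 -attribute "translateY" -value 30 "legOffsetL";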
Play the animation. It looks as if the model raises her foot starting at frame 30 and steps on the bump at frame 50. However, she continues to walk for the remainder of the animation with her left foot on an invisible elevated platform and her right foot at ground level. Your animation should look like the movie bumpLeftLegPlat_sideView.mov on the CD. Let's bring our model's left foot down so that she appears to walk only on a bump and not on a platform (see Figure 4.11). Follow these steps:
1. In the Time Slider, MM copy frame 36 to frame 72 and set a key. This ensures that the model's foot stays raised between frames 36 and 72, the duration of time that this foot is on the bump.
2. In the Time Slider, MM copy frame 30 to frame 86 and set a key. At frame 86, the model's left foot is back in the normal offset position.
Your animation should now look like the movie bumpLeftLeg_sideView.mov on the CD. Switch to the Side window. As you play the animation, notice that although the model's left foot rises to step on the bump, her right foot does not elevate and appears to move through the bump. Let's fix that (see Figure 4.12). Follow these steps:
1. In the Perspective window, select the right ankle offset controller and set keys on frames 50 and 69 in the Time Slider.
2. Go to frame 56. In the Channel box, set a value of 60 in the Translate Y field. Set a key.
Your animation should now look like the bumpLegsOnly_sideView.mov movie on the CD. Our model's feet are looking better, but her upper body needs to react. Let's adjust her root (see Figure 4.13). Follow these steps:
1. Select the root offset controller. Set keys on frames 50 and 84 to define the duration of her root animation. Next, we will translate her root offset up.
Figure 4.11: After frame 86, the model's left foot is positioned back on ground level.
Figure 4.12: The model's right ankle is raised by a value of 60 in Translate Y. Her foot no longer intersects the geometry of the platform.
Figure 4.13: Translating the model's root offset controller affects her whole upper body.
Remember that on this model, the rootOffset's Translate X value takes the place of the Translate Y value. Therefore, when you translate the root offset controller up and down, enter values in the Translate X field in the Channel box.
2. With the root offset controller still selected, go to frame 56 and set a value of 35 in the Translate X field of the Channel box. Set a key.
3. In the Time Slider, go to frame 68 and set a value of 9 in the Translate X field of the Channel box. Set a key. The keys you just set have lifted the model's root. Now let's add a key that acts as the follow-through for her body weight coming down.
4. Go to frame 75 and set a value of -18 in the Translate X field of the Channel box. Set a key.
Now let's work on making the model's hands move with the rest of her body. Follow these steps:
1. In the Perspective window, select both the right and left wrist offset controllers. Set keys on frames 40 and 82.
2. Go to frame 56. In the Channel box, set a value of 35 in the Translate Y field.
3. At frame 69, set a Translate Y value of 10, and at frame 76, set a Translate Y value of -13.
Play the animation. She walks and steps on top of a bump that wasn't there when the action was captured. Let's fine-tune the animation. You might have noticed that the model's left foot sinks slightly when she steps on the bump. Let's fix that in the Graph Editor. Follow these steps:
1. Select the left ankle offset controller. Open the Graph Editor, and select the Translate Y value under the legOffsetL node. You will see the Y translation curve. If you do not see the f-curve in its entirety, choose View → Frame All in the Graph Editor, or press the a key as a shortcut.
2. LM drag a selection box around the keys you placed at frames 36 and 72. Both keys are selected, and the selection handles appear for each selected key (see Figure 4.14).
3. In the Graph Editor, choose Tangents → Flat from the drop-down menu (see Figure 4.15). (A MEL equivalent of this tangent fix appears after the figures that follow.)
For a final touch, add rotation to the model's hips (see Figure 4.16). Follow these steps:
1. Select the root offset controller, and go to frame 56 in the Time Slider. Select the Rotate tool, and rotate the root offset controller so that the model's left hip is higher than her right hip. Set a key.
2. Go to frame 75, and rotate the root offset controller so that the model's right hip is higher than her left (see Figure 4.17). Set a key.
3. Save this file as Bump.mb.
Figure 4.14: In the Graph Editor, select frames 36 and 72 on the legOffsetL Translate Y curve.
Figure 4.15: Flattening the tangents creates a plateau in the curve, which stabilizes the model's foot and places it firmly on the ground.
Figure 4.16: Creating hip rotation. The more the hip rotates, the more swagger and attitude.

Play your animation. It should look like the bumpComplete_sideView.mov and bumpComplete_perspView.mov movies on the CD. To get a better view of your animation, hide the mocapSkeleton, offsetControls, and offsetSkeleton layers in the Layer Editor so that you see only the bindSkeleton layer. With these settings, there is no additional information on the character to distract your eye when viewing her animation.
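The flat-tangent fix from step 3 of the Graph Editor steps also has a direct MEL equivalent, again assuming the legOffsetL node name shown in the Graph Editor:

    // Flatten the tangents of the keys at frames 36 and 72 so the foot
    // plants on the bump instead of drifting.
    keyTangent -time 36 -attribute "translateY" -inTangentType flat -outTangentType flat "legOffsetL";
    keyTangent -time 72 -attribute "translateY" -inTangentType flat -outTangentType flat "legOffsetL";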
An offset controller is a subtle but powerful animation tool when working with mocap. You can use it to animate small changes in the motion capture character in a speedy
Figure 4.17: Rotate the hips so that the right hip is higher than the left.
and efficient manner. If you are not happy with the results, you can delete keys from the offset controllers without affecting the mocap data.
Using the Trax Editor to Animate on Top of Existing Mocap Data
In this exercise, we will again work with our model that walks with poise and attitude. This shot would be perfect if not for one small detail: the director wants the model to blow a kiss and wave to her fans. In the past, this would have been a time-consuming endeavor because motion capture data has keys on every frame. You would have had to eliminate and reduce keys and then key the rotation of the joints in her shoulder, arm, and hand. However, with the Trax Editor, you can create an overlay animation of the model raising her arm and waving without destroying the integrity of the original motion capture. The Trax Editor is a nonlinear animation sequencer that you can use to layer and blend clips from animated elements and overlap them to create new motions. The Trax Editor can be quite useful when you want to layer keyframed animation over motion capture data. To start, load the scene file that you saved earlier in this chapter, Offset.mb.
Workflow for the Trax Editor and Motion Capture

You can animate over motion capture in many ways (the first example in this chapter illustrates one other method), but one of the most effective and simple procedures uses the nonlinear animation features of the Trax Editor, especially since Trax doesn't destroy your original data. For the purpose of this example, we already created a clip of the mocap.
Your workflow for the kiss and wave actions is as follows:
1. Lock all nonanimation attributes.
2. Animate the character blowing a kiss and waving good-bye using Rotation X, Y, and Z values on the model's right arm.
3. Create a clip of the kiss and wave.
4. Shift the kissNwave clip's starting time.
5. Adjust the f-curve keys in the Trax Editor's Graph Editor.
Lock Non-Animation Attributes

It is good practice to lock all attributes that you will not key, thus reducing the clutter of keyframes resulting from your work. In this example, we'll animate only the rotational values of specific joints. We will not animate the Translate, Scale, and Visibility values in the Channel box; therefore, we'll lock these values to ensure that we don't accidentally adjust one. Follow these steps:
1. In the Outliner, expand the mocapSkel node until you have the following nodes visible: R_collar, R_shoulder, R_elbow, R_wrist (see Figure 4.18).
2. Select R_collar, R_shoulder, R_elbow, and R_wrist. You can do this by clicking the R_collar node, and with the mouse button still pressed, drag the cursor down onto R_wrist.
Figure 4.18: Expand the mocapSkel node. You will be setting rotation values in the Channel box for the R_collar, R_shoulder, R_elbow, and R_wrist joints.
3. With the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes still highlighted, in the Channel box select Translate X, Translate Y, and Translate Z. RM click on top of the selection to display the Channels dialog box. Scroll down and select Lock Selected. The numeric values for Translate X, Translate Y, and Translate Z are shaded and are now locked.
4. In the Channel box, select Scale X, Scale Y, Scale Z, and Visibility. Select Lock Selected for these values also. The numeric values for Scale X, Scale Y, Scale Z, and Visibility are shaded and locked. (A MEL sketch of this locking appears after the R_collar steps below.)

Set Keys to Define the Animation Range

When animating over an existing clip, it is extremely important to set "neutral" keys at the head and tail of your new animation to define a beginning and an end for the range of the new clip. If you don't, the clip and keyframed animation will interact in a bizarre manner. In this example, the clip of the kiss and wave will start on frame 0 and end on frame 132. We will not modify the value of the first and last keys; they serve as start and end poses of the kiss and wave animation that will blend into the animation of the model walking. We will set keys on four joints: the R_collar, R_shoulder, R_elbow, and R_wrist. The R_shoulder and R_wrist joints will have an animation range of 132 frames. The R_collar and R_elbow joints will have an animation range of 111 frames. Even though certain joints do not have the same frame range, the length of the clip will be the length of the longest frame range, 132 frames, when the four joints are selected and made into a clip. In the Timeline, the last key that has been set on any joint defines the duration of the entire clip. Therefore, all the last keys do not have to be on the same last frame.
1. In the Outliner under mocapSkel, select the R_collar joint node, and then Ctrl+click the R_elbow node. Both the R_collar and R_elbow joint nodes will be selected in the Outliner.
2. In the Timeline, go to frame 0 and set a key by pressing the s key on your keyboard.
3. Go to frame 111 and set a key. You have now created an in and out for the animation range of the R_collar and R_elbow nodes (see Figure 4.19).
4. In the Outliner, select the R_shoulder and R_wrist nodes.
5. In the Timeline, set keys at frames 0 and 132. You have now created an in and out animation range for the R_shoulder and R_wrist nodes (see Figure 4.20).

Set Keys for the Kiss and Wave Animation

We will set keys on the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes to animate the essential movement of the model blowing a kiss and waving. The only transform values we will enter in the Channel box are Rotate X, Y, and Z (recall that the rest of the transform nodes are locked).

Set R_collar Keys for Rotate X, Y, Z

First, set keys for rotation on the R_collar joint. Follow these steps:
1. In the Outliner under mocapSkel, select the R_collar joint node.
2. Set the keys shown in Table 4.1 on the designated frames.
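For those who prefer to script this bookkeeping, here is a hedged MEL sketch of the locking and range-keying steps above. It assumes only the joint names shown in the Outliner (R_collar, R_shoulder, R_elbow, R_wrist); the flags used are standard MEL.

// Lock the translate, scale, and visibility channels on the four arm joints
// so that only rotation can receive keys.
string $joints[] = {"R_collar", "R_shoulder", "R_elbow", "R_wrist"};
string $channels[] = {"tx", "ty", "tz", "sx", "sy", "sz", "v"};
for ($j in $joints)
{
    for ($c in $channels)
    {
        setAttr -lock true ($j + "." + $c);
    }
}

// Neutral keys that define each joint's animation range:
// R_collar and R_elbow run frames 0-111; R_shoulder and R_wrist run 0-132.
setKeyframe -time 0 -time 111 -attribute rotateX -attribute rotateY -attribute rotateZ R_collar R_elbow;
setKeyframe -time 0 -time 132 -attribute rotateX -attribute rotateY -attribute rotateZ R_shoulder R_wrist;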
Figure 4.19: Set an in and out range for the R_collar and R_elbow joints.
Figure 4.20: Set an in and out range for the R_shoulder and R_wrist joints.
Type the values in the Channel box fields.
Your animation should look like the waveR_collar_perspView.mov movie on the CD.

Set R_shoulder Keys for Rotate X, Y, Z

Next you will set X, Y, and Z rotation keys for the R_shoulder joint node. Follow these steps:
1. In the Outliner under mocapSkel, select the R_shoulder joint node.
2. In the Channel box, set the keys as shown in Table 4.2.
Your animation should look like the waveR_shoulder_perspView.mov movie on the CD.

Set R_elbow Keys for Rotate X, Y, Z

Next you will set X, Y, and Z rotation keys for the R_elbow joint node. Follow these steps:
1. In the Outliner under mocapSkel, select the R_elbow node.
2. Set the keys shown in Table 4.3.
Your animation should look like the waveR_elbow_perspView.mov movie on the CD.

Set R_wrist Keys for Rotate X, Y, Z

The R_wrist joint node is the last joint for which you will set X, Y, and Z rotation keys. Follow these steps:
1. In the Outliner under mocapSkel, select the R_wrist node.
2. Set keys on the suitable frames, as shown in Table 4.4.
Your animation should look like the waveR_wrist_perspView.mov movie
on the CD. Scrub the Timeline or play the animation to see the model raise her arm, blow a kiss, and wave.

Create a Clip of the Kiss and Wave

We will now take this chain of joints and turn it into a nonlinear animation clip. Follow these steps:
1. In the Outliner, select the mocapSkelCharacter and then select the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes. Selecting the mocapSkelCharacter will associate the clip with that character.
2. Press F2, and then choose Animate → Create Clip.
3. In the Create Clip Options dialog box, shown in Figure 4.21, type kissNwave in the Name field. Leave all other parameters at their default settings.
4. Select Create Clip. You will notice that the keys from the Timeline have disappeared. They have been moved into the Trax Editor.
5. Choose Window → Animation Editors → Trax Editor to open the Trax Editor. You will find that kissNwave has been created as a second track under the sexyWalk track.
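The same clip can also be made from the command line. The sketch below is an approximation: it assumes the character set is named mocapSkelCharacter, as in this scene, and uses only the clip command's basic flags; the Create Clip Options dialog exposes the same settings interactively.

// Select the character set and the keyed joints so the new clip
// is associated with the character, then turn the keys into a Trax clip.
select -r mocapSkelCharacter R_collar R_shoulder R_elbow R_wrist;
clip -name "kissNwave" -startTime 0 -endTime 132;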
Figure 4.21: The Create Clip Options dialog box.
1. In the Trax Editor, select the kissNwave clip. Notice that the clip has numbers on both the head and tail, designating the start frame and the end frame. On the left side of the kissNwave clip is the number 0; on the right side is the number 132.
2. LM click the kissNwave clip, and slide it until the frame start reads 45 and the frame end is 177. Scrub the Timeline. Now the animation doesn't start with the model blowing a kiss. Instead, she takes a few steps and then raises her arm. There seems to be a little more attitude in her behavior since we shifted her kiss and wave until after her entrance.

Figure 4.22: Shift the kissNwave clip on the Timeline until you are happy with the timing of the model's wave. Use the Graph Editor to tweak f-curves.

Adjust the F-Curves in the Trax Editor's Graph Editor

Although the kiss and wave keys are not visible in the Timeline, you can still manipulate them in the Trax Editor. Follow these steps:
1. In the Trax Editor, select the kissNwave clip, as shown in Figure 4.22.
2. Choose View → Graph Anim Curves, or select the Graph Editor icon in the Trax Editor pane to open the Graph Editor.
3. With the Move tool selected, LM click to select keys and MM click to move them. You can manipulate the f-curves by adjusting keys the same way you would normally in the Graph Editor.
4. Adjust the curves until you are happy with the result.
Your final animation should look like the waveComplete_perspView.mov movie on the CD. You have now finished this animation, and you just got approval from the director for your shot, so take a well-deserved break and grab a latte. Then read on to explore another procedure for animating over motion capture with the Trax Editor.

Disable the mocapSkel Clip in the Trax Editor

We mentioned earlier that you can create animations on mocapped skeletons using two techniques. We used one of these techniques in the previous section when we set keys on a moving
skeleton driven by capture data. The other technique involves disabling the motion capture clip in order to animate on a static character. Using this technique, you isolate your animation to evaluate its strength on its own merit by turning off the primary body movement. Follow these steps:
1. In the Timeline, go to frame 0.
2. Open the Trax Editor, and RM click and hold the sexyWalk clip to display a pop-up menu. Clear the Enable Clip check box, and do the same for the kissNwave clip. Scrub the Timeline. Result: the motion capture clip sexyWalk and the animation clip kissNwave are disabled, and therefore the skeleton does not move. You will not lose this motion; it is simply turned off for now.
3. Set keys on the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes. You can create your own wave or use the keys in Tables 4.1, 4.2, 4.3, and 4.4, earlier in this chapter.
4. In the Outliner, select the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes. Press F2, choose Animate → Create Clip, and enter kissNwaveRelative in the Name field. Click the Create Clip button. Scrub the Timeline. You should see the arm animating while the rest of the model's body stays static.

Enable sexyWalk

To turn on the model's motion capture, we need to enable that clip. Follow these steps:
1. In the Trax Editor, RM click sexyWalk and check Enable Clip to activate the motion capture movement. Scrub the Timeline. Something is not right. The model's arm is waving erratically and is not in the positions that we set. We didn't key her arm at the origin, and its clip needs to be made relative to the motion capture clip.
2. In the Trax Editor, RM click kissNwaveRelative and check Relative Clip. Scrub the Timeline. Our model is now walking and waving.
Because we created the animation of the arm with the sexyWalk clip disabled, we can now enable or disable sexyWalk at any frame in the Timeline and analyze the kissNwaveRelative clip moving independently of the motion capture clip. Adjust f-curves accordingly with the sexyWalk clip enabled or disabled.

The Next Step

We used the Trax Editor to create an overlay animation clip on top of a motion capture walk. Not once did we have to deal with all the keys that motion capture produces. In fact, we used traditional forward kinematics to animate the arm. This exercise should give you a starting place to explore animation with motion capture in the future. Keep practicing with this walk. Try creating a clip of the model turning her head or her torso or waving with her other hand. Use your imagination and the techniques we covered, and have fun.
Combining Mocap with Animation
In this example, we'll combine the mocap that we already have with keyframing, using the simplified hybrid setup you've been using from the CD. The model will walk along and inadvertently stumble over a box. Since the mocap does not include her tripping, we must keyframe this motion on the animation skeleton, blending it with the mocap skeleton's motion. This particular skeleton has only basic animation controls for purposes of demonstration. A production skeleton should have several more options such as forward kinematic controls on the arms and legs, as well as inverse kinematic controls.
Let's get started. Follow these steps:
1. Load the file Offset.mb that you created earlier in this chapter and source the script Snap.mel.
To source the MEL script, open the Script Editor, and choose File → Source Script. Browse to the location of the Snap.mel script on the CD. Select it, and click Open.
2. At the command line, type snap to open a box that has a Snap Skeleton button and a Reset button, as shown in Figure 4.23.

Make a Snap Button

To make things easier later, let's make a button for the Snap tool. This tool is a MEL script that will help us get the animation model into position at the point where the transition from mocap to keyframed animation occurs. Follow these steps:
Figure 4.23: The Snap button is activated.
1. In the input area of the Script Editor, type snap, highlight the word, and drag it to the shelf.
2. In the main menu, choose Window → Settings/Preferences → Shelves.
3. Scroll down the list to find the Snap tool, and in the box next to Icon Name, type snap, and press Enter.
4. Save all shelves and close the dialog box.
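If you would rather skip the shelf entirely, you can source and run the tool straight from the Script Editor. This is only a sketch; the path below is a placeholder and should point at wherever you copied Snap.mel from the CD.

// Source the script (placeholder path) and run the command it defines.
source "C:/mocapFiles/Snap.mel";  // hypothetical location of the CD script
snap;                             // opens the Snap Skeleton / Reset window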
Figure 4.24: The model with foot controls selected

Make a Button for Selecting Animation Controls

Now let's make a button to select the animation controls on the animation skeleton. Start by hiding the layers for the parts of the model that we don't need at the moment. Make sure that the visibility is off for the bindSkeleton, the mocapSkeleton, the offsetSkeleton, and the offsetControls. Follow these steps:
1. In the Show menu in your Scene Menu bar, turn off Joints, IK Handles, and Locators.
2. Select the two large boxes that represent the foot controls, leaving the other boxes and controls on the feet unselected, as shown in Figure 4.24.
3. Add to the current selection by holding down Shift and dragging over the remaining boxes and curve controls that are visible above the feet. Leave the large Control box cube next to the model unselected.

Make a Button to Select Animation Controls

Now we'll create a button to make the animation controls easier to select. Follow these steps:
1. In the Script Editor, highlight the output that describes the selection you just made. Make sure the two large boxes of the feet are included in the list of objects selected.
2. Choose File → Save Selected to Shelf, and enter a name such as SELECT.
3. Click an empty space in your workspace to deselect everything.
Now, let's bring back some of the layers you've hidden, such as the bindSkeleton and the mocapSkeleton. You can use the offset skeleton and offset controls as well later, if you like, but for now, let's keep them hidden to reduce screen clutter. Let's start animating. Select the Control box, and key the Mocap_keyframe attribute in the Channel box to a value of 0.
Put the bindSkeleton layer in reference mode. This will make it easier to select the animation Control boxes.
Determine Where Mocap Ends and Keyframing Should Begin

Now, let's assume that we want the model to stumble over a box at frame 113, since that's where she is shifting weight for the next step. Follow these steps:
1. To create the box, choose Create → Polygon Primitives → Cube. In the dialog box, type the dimensions of the box, maybe along the lines of a width of 1.5 and a height and depth of 0.3. Place the box in front of the model's feet, near the tip of her left toe. As you scrub through the Timeline, you notice the model's feet go through the box at frame 115.
2. Let's place another key on the Control box at frame 112, at a value of 0. At frame 113, key the Control box again, only with a value of 1. This creates the transition of influence from the mocap skeleton to the keyframe animation skeleton. The bind skeleton follows whichever skeleton is keyed to the on position, meaning having a value of 1. The model will snap to the exact position of the animation skeleton in the workspace.
3. View your curve in the Graph Editor, and ensure that the first two keys have flat tangents. At this point, if you scrub forward, you'll see the body of the model float backward to its initial position, which happens because the influence of the animation skeleton is on at that point, and the influence of the mocap skeleton is off.
4. To help get the body into the position of the last frame of mocap used before the transition, we'll use the Snap tool. Scrub back to frame 114, and ensure that the mocap skeleton is visible in the Layer Editor. Make sure that Joints are turned on under Show in your working view as well. Click the Snap button you made earlier. The bind skeleton and the animation skeleton should both snap to the position of the mocap skeleton. Click the Select button. Set a key by pressing s on your keyboard. The Snap tool is designed to be used only once at the transition point between motion capture and animation.
5. Keyframe the rest of the animation as you would a regular animation skeleton.
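Here is a hedged MEL sketch of the switch keys from step 2, assuming the control object is named ControlBox and carries the Mocap_keyframe attribute described above (the attribute name comes from the scene; the node name is a placeholder). The flat tangents from step 3 are set with keyTangent.

// Key the mocap/keyframe switch: 0 = follow the mocap skeleton,
// 1 = follow the keyframed animation skeleton.
setKeyframe -time 0   -value 0 -attribute Mocap_keyframe ControlBox;
setKeyframe -time 112 -value 0 -attribute Mocap_keyframe ControlBox;
setKeyframe -time 113 -value 1 -attribute Mocap_keyframe ControlBox;

// Hold the value flat up to the transition so the blend does not drift.
keyTangent -time "0:112" -attribute Mocap_keyframe
           -inTangentType flat -outTangentType flat ControlBox;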
Summary

In this chapter, we covered the basics of motion capture and discussed methods of working with it. We manipulated the mocap using offsets and the Trax Editor, as well as by adding keyframes to a hybrid animation model to blend traditional keyframing with motion capture. You can combine animation and motion capture in many ways, so have fun exploring and try different solutions.
Lip-Synching Real-World Projects
John Kundert-Gibbs, Dariush Derakhshani, and Rebecca Johnson
One of the most tedious and time-consuming, yet highly visible and critical, tasks in 3D animation is lip-synching prerecorded vocal tracks on human or anthropomorphized characters. Not only is a character's mouth highly visible in most shots, but audiences are also extremely good at determining whether lip-synched animation looks "right." Thus arises the daunting task of animating dozens or hundreds of words accurately, yet efficiently enough to fit within a tight time constraint. Oddly, if lip-synching is done properly, it becomes an almost invisible effect; it's only when lip-synching is off somehow that audiences even pay attention to it. Thus, the art of lip-synching is the art of creating an effect that looks so natural that people don't even notice it. Although lip-synching in hand-drawn animation requires exposure sheets or other methods of prefiguring each mouth shape (and expression), you can use today's 3D packages such as Maya to skip this phase if needed (though many productions still use exposure sheets) and use the shape and sound of the voice track within the program itself as a guide. In this chapter, we'll discuss the basic theory of lip-synching and give you some general guidelines for doing it properly and efficiently. We'll then work through two sample scenes, putting this theory into practice. After completing this chapter, you should have a solid understanding of how to produce your own lip-synched animations and be ready to create your own "invisible" vocal animations.
Creating a Lip-Synched Animation

You can do lip-synching in a multitude of ways, but we'll focus on a proven technique that is efficient and allows a great deal of fine-tuning control. With this technique, which uses the power of a number of Maya's software tools, you can build a library of words, phrases, and emotions and then use the Trax Editor to place them in the Timeline based on where a given word falls in a voice track. In addition, with this technique you can control the weight (or amount) of any given word, you can control how long it takes to pronounce the word, and you can even add small touches on top of the basic words as they are spoken. All told, this method is extremely efficient—especially if you need to do a series of animations with one or more characters. It allows for great accuracy in the lip-synching process and is even transferable from one character to another, further saving the animator's time and energy. The one drawback is that it takes a good deal of time and some understanding of the technique to get the whole process up and running. Here is a general outline of how to proceed, using this method, to create effective lip-synch animation.

Preproduction Tasks

As with all animation work, the first step is preproduction. In preproduction, you create the storyline, the script (obviously an important step when lip-synching is involved), and the look and feel of the animation, and create storyboards that describe the action. If you are working on a series of similar animations (for example, a Saturday-morning cartoon series), you will likely have a "bible" that contains the general art direction, character descriptions, and possibly technical material. In addition to creating the general look of your models and scenes and creating the dialog, in the preproduction phase you might also define general issues of how your character(s) will speak. For example, you might decide how realistic the lip-synching needs to be and determine your general methodology. A little forethought at this stage can save valuable time later. For example, if the characters are extremely cartoonish and stylized, you might need only simple "open/closed" positions for their mouths. On the other hand, if the animation requires extreme realism, now is the time to face this challenge head-on and be sure you have enough resources to tackle this task. Never overanimate your characters. If they need only simple mouth shapes, don't build a complex and/or hard-to-use system for lip-synching. You are only wasting your—and your company's—money.
Recording Vocal Tracks

Once you make basic decisions about the animation and the dialog track is "locked," or finished, it is time to find voice talent and record your dialog! There are really only two secrets to getting a good voice track for your animation:
• Finding talented, hard-working actors, preferably with interesting, unique voices
• Getting the cleanest recording you can
The number of times otherwise good dialog tracks have been ruined by poor recording is probably too high to count, and even decent recordings often need lots of massaging to fix
problem areas, all of which leads to wasted time and money and a lot of frustration that can be avoided by creating or renting an appropriate facility in the first place. If you have the budget, by all means rent a recording studio for the time you need it. A good rule of thumb is that you need about an hour of recording time per minute of finished dialog. If you can't afford to rent a studio, see if you can beg facilities from somewhere close by. Often—especially for nonprofit projects—managers of facilities will allow recording sessions for little or no charge. The one problem that often arises in these circumstances is that the recording session has to take place at odd hours, which can be stressful for cast and crew. If you must construct your own recording space, try to keep the following points in mind:
• Find the best microphone you can, and never, ever use the built-in microphone on a camera or a camcorder.
• Find the acoustically deadest space you can. An anechoic chamber is best; otherwise, use heavy drapes, foam, or other sound-deadening materials to reduce echo.
• Remember that floors and ceilings create echo, so lay out blankets, carpeting, or other materials on them.
• Listen carefully to your space or do a test recording. Listen for any kinds of hums or "leaking" sound from the outside world. Anything from fluorescent light fixtures to air-conditioning can cause a low level of noise in your recording that is difficult to delete.
• If possible, have only your voice actors and the microphone(s) in the room. The fewer machines (recording devices, cameras, computers) in the room, the less noise you will get.
• Never try to create an effect when recording. For example, don't try to capture a natural echo if your characters will be in a cave. With the audio-engineering software available today, it's extremely easy to add this type of effect, but almost impossible to get rid of an incorrect echo or other effect after it's recorded. The best sound for recording is completely flat and noiseless, save for your actors' voices.
Be sure to test your actual actors before you do your final recordings. Often actors' voices—especially stage actors' voices—have an extreme dynamic range, which can cause a poorly adjusted recording setup to clip. It's much better to find this out in a trial recording than after your actors have gone home for the evening!
To find voice actors for your dialog tracks, first decide what your voices will sound like. Next, either hire professional voice talent (if your budget allows) or go scouting for amateur actors who are willing to work for the exposure. Someone who is familiar with local community or college theater programs can find you good talent quickly. If you have the good fortune to be able to audition your actors, listen to what they bring to a particular role. Often a well-trained actor gives you a more interesting reading than what you had in mind. You just have to be able to hear that different is better, not worse. A point of some debate is whether to rehearse your cast before doing voice-over recordings. We believe a short rehearsal just before recording is beneficial because it helps actors get into the flow of the scene. Others argue, however, that this rehearsal reduces the spontaneity of the recording. Regardless of whether you rehearse, finding a good director can help get the best out of your actors. During the recording process itself, continue to listen to your actor(s).
They will often come up with marvelous spur-of-the-moment line readings that can make a dull line into a memorable or humorous one. One of the best techniques for getting a range of line readings out of an actor is to have them read a line three or four times in a row with just a slight pause between readings. Often this repetition helps them loosen up and try readings that they wouldn't necessarily have tried had they had more time to rehearse. Again, the audio software available today makes creating a dialog track from many individual takes quite easy.
Modeling for Lip-Synching

Modeling for lip-synching is the complement of getting a good dialog track: the cleaner and more interesting your model, the easier it will be to create interesting animation to go with your dialog track. For the most part, what goes for all modeling goes for the head and mouth of a character to be lip-synched: create a clean surface with regularly spaced isoparms or facets that avoid bunching up, especially in areas where a lot of activity will take place (such as the cheeks or mouth area; see Figure 5.1). In addition, consider how the virtual muscles under your character's face will pull the skin and lips to create sounds. This task may be easy if your character is extremely simple, but for most models, you'll need to think about how their mouths move. Most often people refer to human anatomy when creating mouth shapes, because most animated talking creatures—human or not—move their lips like people do. Because most animals don't have the range of motion in their lips necessary for speech, anthropomorphizing their mouths is necessary for convincing mouth animation.

Figure 5.1: A cleanly modeled head, ready for rigging and animation

The human mouth is made up of a number of muscles laid out in concentric rings around the lips, allowing us our large range of motion and expression in this area. The most straightforward way to model this muscle structure is to model the mouth area as a series of rings moving away from the lips. When you then pull on faces or control vertices, the mouth area behaves as if there were a more-or-less circular muscle structure beneath it.
Rigging for Lip-Synching

Once your model is built, it's time to set it up for animation. You can rig mouth animation in a couple of ways:
• You can manipulate the skeletal bones inside the mouth area.
• You can directly manipulate the control vertices of the model's surface.
Again, depending on the needs of your animation, you might want to use either or both of these methods. For simple, stylized, or "soft" characters, such as Monsieur Cinnamon that
we'll work with later in this chapter, adjusting the surface of the mouth to get the appropriate shapes works well. However, real human mouths depend on a jawbone that rotates below the skull to achieve the major mouth positioning (the lip muscles fine-tuning this motion). As the jaw rotates—an effect difficult to simulate using blend shapes—rotating bones inside your character's mouth produce a more realistic gross motion for the mouth animation. Generally, the more realistic your creature, the more people will notice if only the surface of the model is moving. In such cases, using bones or, more often, a combination of bones and blend shapes is a better solution. Using bones to rig the mouth can be a simple or complex process, depending on the effect you're trying to achieve. In the simplest case, you can use one bone coming from the head joint, which acts as a jaw for the character's mouth, as in Figure 5.2. You must then skin this bone to the jaw area and weight it properly so that as the bone moves, the lower jaw area reacts accordingly.

Figure 5.2: A single jawbone extends from a head bone, allowing the jaw area of the head to rotate similarly to a real jaw.
Usually a smooth bind is best for skinning the jaw area. You then need to manipulate the weights so that the lower jaw is completely affected by the jawbone while the upper jaw is completely unaffected. Fading the influence of the lower jawbone in the lower cheek area produces a soft transition from the low jaw to the rest of the head.
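A minimal MEL sketch of that smooth bind follows, assuming a head joint, a jaw joint, and a head surface named head_joint, jaw_joint, and headSurface (hypothetical names for illustration). The weight adjustments described above are still done interactively afterward.

// Smooth bind the head surface to the head and jaw joints so the lower
// jaw can follow its bone; limited influences keep the deformation simple.
select -r head_joint jaw_joint headSurface;
skinCluster -toSelectedBones -maximumInfluences 2 -dropoffRate 4.0;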
In more complex cases, you can build an entire "muscular" system in spiderlike fashion out of bones surrounding the lip area. As each "leg" pulls on an area of the lip, the affected skin of the model is distorted along with the bone. Although this complex bone structure can create amazingly subtle effects, it is fairly complex to set up and use and, with the advent of blend shapes, is not used as frequently as it once was. To rig a character for blend shapes, you make multiple copies of the default mouth area—which is modeled in an expressionless neutral pose—and deform the vertices of the mouth to create various shapes, which become the target shapes that the neutral head will blend to when you animate the mouth. Rigging for blend shapes, then, is creating a library of mouth shapes that you can select, either by themselves or in combination, to create the final mouth shape at each moment of the final animation. For simple characters, the mouth library can be fairly small. Smile, frown, Ah (open), M (closed), and E (half open and stretched)
shapes might be all you need to create convincing lip-synching. For more realistic characters, your library of mouth shapes obviously increases to allow for the subtlety of the human mouth. A library of dozens of mouth shapes might be necessary for a realistic human character. As always, don't overrig your blend shapes. Create only the shapes you'll actually use; otherwise, you waste time both in rigging and in hunting for the correct blend shapes during animation. To create the actual blend shape node itself, follow these steps:
1. Select each of your target models, and then select your default model.
2. In the Animation menu, choose Deform → Create Blend Shape.
3. Open the Blend Shape window (choose Window → Animation Editors → Blend Shape) to see your blend shape controls, as in Figure 5.3.

Figure 5.3: The Blend Shape window

To create mouth shapes, simply move one or more sliders up and down to create the shape you want for the current frame. Once you create your blend shapes, you will probably want to hide the target models in order to clean up your work area.
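The equivalent MEL is a single command. The sketch below is hedged: the target and base names are placeholders standing in for whatever you called your sculpted mouth shapes and the neutral head.

// Targets are listed first and the neutral base last, matching the
// selection order used in the steps above.
select -r openTarget ohTarget smileTarget neutralHead;
blendShape -name "mouthShapes";

// Later, dialing a shape in is just setting that target's weight (0 to 1).
setAttr "mouthShapes.openTarget" 0.7;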
One useful aspect of the blend shape method in Maya is that you can use base blend shape targets to create "meta" blend shape targets. For example, say you want to create the mouth shape for whistling, and you already have target models for closed pursed lips and for the O sound. Rather than create a new target model from scratch for the whistle, you can use the Blend Shape window to combine the O and pursed mouths and then save this new shape as a blend shape target by clicking the Add button in the Blend Shape window. In Figure 5.4, the third blend shape (whistle) was created from a combination of O and pursed shapes given by the slider positions. Creating blend shapes from other blend shapes is a powerful time-saver during the rigging process. You can create five to ten basic mouth shapes and then produce the actual phonemes (Ah, O, E, M, K, and so on) by combining these basic mouth shapes.

Figure 5.4: The Blend Shape window shows the whistle mouth shape created by combining the O and pursed mouth shapes.
Creating Vocal Clips Using the Trax Editor

With your mouth shapes stored as blend shapes, you can load your dialog track and start keyframing your dialog right away. (To do so, move to a frame, move sliders to get an appropriate shape, and click the Key or Key All button to set keys on the blend shape for that
frame.) Although this method may be fine if you only need to lip-synch a few seconds of speech, a much better method for longer stretches of dialog is to use Maya's character and Trax Editor features to create clips for various words and mouth shapes.

To load sound into your Maya scene, save your dialog track as a .wav file, choose File → Import, and browse to the sound file. Once the file is imported, you can see the shape of the sound file in the Timeline by RM clicking the Timeline and choosing Sound from the shortcut menu.
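Importing audio can also be scripted. This is only a sketch: the file path is a placeholder, and the sound command shown here is the usual MEL route for bringing a .wav file into the scene.

// Bring the dialog track into the scene as an audio node. Show it in the
// Timeline afterward by RM clicking the Timeline and choosing Sound.
sound -file "C:/lipsynch/dialog.wav" -name "dialogTrack";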
We will look at the Trax Editor in more detail in the first hands-on example later in this chapter, but the general method is to create a character (or subcharacter) for the mouth. Add the blend shape node to this character with the envelope and shape names selected in the Channel box and with the From Channel Box option selected in the Create Character options. This last step ensures that only these attributes will be keyed with the character, removing unnecessary keys during animation. Once the blend shape is a character, take the script and start creating the words the character speaks to form a library of character clips. Because the Trax Editor allows for scaling of words, you do not need, at this point, to match the words with any given timing; so in general you select a standard length (5 or 10 frames) and make all words last that long by default. During the actual synching process, you can adjust this timing, as well as the size of the mouth for the word, to fit the way the word is actually spoken. To create a clip, simply keyframe the series of mouth shapes for any given word or reaction, choose Animate → Create Clip, and give the clip the name of the word you just created, as shown in Figure 5.5. Once you animate an oft-used word for your character, you can save it as a clip and then load it for future use.

Figure 5.5: Creating a clip for the word "goodbye"

It is a good idea to create a neutral mouth shape and "breath" shape in addition to your words. You'll use the neutral mouth shape when the character is resting between sentences, and you'll use the "breath" shape when the character is getting ready to speak again. If you have lots of dialog or more than one character speaking in your piece, two distinct advantages will accrue from this method in addition to the ease of placing words where they need to be in a more intuitive manner (see the next section):
• Any repeated words ("the" is often repeated multiple times, for example) have to be keyframed only once, because you can reuse the same source clip as many times as needed.
• You can easily share a library of words between characters. As long as they have the same blend shape targets in the same order—even if the blend shapes look different
from one character to the other—you can transfer clips from one to the other, saving even more setup time. Thus, you can incrementally grow your word library (again, assuming all your characters have the same targets in the same order) over time on a single project or even multiple projects. For more information on transferring clips from one character to another, see Mastering Maya 3, by Peter Lee and John L. Kundert-Gibbs (2001, Sybex, Inc.).
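Before moving on to layout, here is a hedged MEL sketch of the character-and-clip setup described above. The node and clip names (mouthShapes, mouthChar, goodbye, and the target weights) are placeholders; in practice the Create Character Set option box with From Channel Box selected and the Animate → Create Clip menu item do the same work interactively.

// Build a character set containing only the blend shape channels,
// so keys set on the character never touch anything else.
character -name "mouthChar"
    "mouthShapes.envelope" "mouthShapes.openTarget" "mouthShapes.ohTarget";

// After keyframing one word on those channels, turn it into a reusable clip.
select -r mouthChar;
clip -name "goodbye" -startTime 0 -endTime 10;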
Creating the Completed Vocal Track

The sound file is read into the computer, and your library of words is completed, so now it's time to lay out the actual words in synch with your sound. Use the middle mouse button to scrub the Timeline to hear your dialog track, choose where each word comes, and drag that word from the Visor window onto your Trax Editor window. Words are stored in the Visor under Animation: Character Clips and Poses: Clips.
When the words are basically in place, select each word (double-click it in the Trax Editor) and use the Channel box to adjust its length and scale so that it better fits the spoken word. If you decide a particular word is not good enough, you can open the word's keyframes in the Graph Editor (choose View → Graph Anim Curves in the Trax Editor menu) and adjust them, as in Figure 5.6. After laying in, scaling, and weighting your words, add a blend between them if you like (select two clips and RM choose Blend Clips) and test your animation using a quick playblast! All the work on the front end creating character clips pays off here: the animation process itself is much faster than keyframing each word in place as you go. Figure 5.7 shows the Trax Editor and clips for a famous sentence.

Figure 5.6: Adjusting the animation curves for a word in the Graph Editor
Figure 5.7: Creating a sentence using the Trax Editor
In general, we have found that moving a character's lip-synched animation one or two frames before the actual sound for that mouth shape starts helps sell the reality of the synching. For some reason, matching sounds exactly to the mouth shape makes the visual mouth shapes seem to lag behind the audio track, which is disturbing. Because it's easy to move clips and fine-tune timing with the Trax Editor, experimenting with timing is a straightforward task.
Finishing the Lip-Synch Animation

In the last stage of the animation process, basic technique gives way to art and storytelling. All the extra work put in here translates into more compelling and believable characters who sell their reality and get the audience to care about the story.

Fine-Tuning and Adding Personality to the Vocal Track

Although the basic lip-synch work is done now, there is always room for fine-tuning and improvement. Without this stage, the lip-synching may seem forced or mechanical, so here is where you get to add character to your model's speech. At this stage, you need to consider the following:
• Anticipation and "overshoot" of words and sentences
• Reaction to other characters' words and actions
• The general personality of your character
At the basic level, your character needs to anticipate what it will say next. In addition to body and facial movement (which are covered elsewhere in this book), you usually open your mouth before speaking in order to breathe in. The larger, louder, or more emotional the sentence or phrase to come, the bigger this anticipation should be. Likewise, after a sentence or a phrase, a character needs to regroup for the next set of words. Especially at the end of loud or emotional utterances, a character often overenunciates the last vowel or consonant, which adds emphasis to the words; the return to a neutral mouth position can take somewhat longer in these cases. Animating a character (or mouth) when it is not speaking can be particularly challenging, because you need to create an "internal monologue" for the character during these periods. If another character tells a joke, for example, does this character find it funny? Or really funny? Or was he thinking about the color of the sky at the time and missed the joke entirely? This information is available to the audience only through the animation of the character, and because most reactions are not scripted, you need to figure them out on your own. Although a big challenge, these "takes" are a great opportunity to be creative and put your own interpretation of the character into your animation work. Just be sure you don't upstage the speaking character with your work! Finally, determining the personality of your character has a bearing on how its mouth looks, and even on how you create your blend shapes in the first place. Is your character a villain in a melodrama? Then it probably sneers constantly, which distorts every word and mouth shape. Or maybe it's a happy character, in which case it will tend to have a more open and relaxed mouth. Or nervous, in which case it might tightly purse its lips when not speaking. One extremely valuable resource in this fine-tuning stage is the dialog track itself. The speech of good actors will give you much of the information you need about the character and its reactions. Even better, if you have a video recording of your actors speaking, you can
"steal" the way their mouth looks for any portion of the dialog, imbuing your animation work with just that much more realism and personality. Integrating the Lip-Synch into the Final Animation In a real production environment, the head of a character is usually animated concurrently with the body to save time. Thus, the head, with all the lip-synch work keyed onto it, will likely exist on its own and will need to be integrated into the body, which has been animated elsewhere. In most cases, there are two possibilities for placing the head back onto the character. The first is if no skeletal structure exists in the head, and the second is if one does. If no skeletal structure exists, your head will likely replace a dummy head that has been childed to at least a single bone, which has probably been keyframed for gross motion of the head. In such a case, simply delete the old head and make the new head the child of the head bone. It should then inherit the motion of this bone for large-scale motion, while maintaining the blend shape work you have done for lip-synching. If, on the other hand, the head skeleton has been inside your model all the time, you may have skinned the head to it (rather than simply parented the geometry to the skeleton) and may have created a lot of animation—even all the lip-synching—via these bones. In such a case, the body animators should have left the head area alone completely, and you should be able to parent the lowest bone in your chain to the neck or collarbone area of the body skeleton, which maintains all your skeletal and blend shape animation work for the entire head area. At this point, it is a good idea to create a playblast or quick render of your completed character to be sure your head and body actions match properly. If the mouth and head are extremely animated while the body is quiet, if the body consistently anticipates actions several frames before the head, or if any other obvious problems arise, you will need to sit down with the body animator and figure out a way to integrate your work to create a seamless whole. Now that we have covered the lip-synch process in the abstract, let's look at two hands-on examples that let you try your hand at lip-synching.
Hands-on Example 1: Lip-Synching Using Blend Shapes and the Trax Editor

Lip-synching, if done well, complements and blends into the overall animation. But if the timing is not quite right or if there are other visible problems, lip-synching becomes a distraction, lowering the quality of the animation. Blend shapes provide a convenient and versatile way to simplify the lip-synching process. And the Trax Editor helps automate the blend shape process by allowing you to store clips of words animated from the blend shapes for easy construction.
Preparing the Model for Lip-Synching

Spending the extra time to properly set up the character helps to make the actual job of matching facial expressions and movements to sounds move quickly, allowing you to maintain your timeline. In this example, we'll use a humanoid character (without teeth or a tongue) to illustrate the basics of lip-synching using blend shapes, a word library, and the Trax Editor. Here is the process:
1. Prepare the character for blend shape creation.
2. Create poses for expressions and sounds.
3. Make the blend shapes.
4. Import the sound.
5. Create clips.
6. Use the Trax Editor to line up sounds and motions.
7. Preview the final scene.
Creating the Head (a General Overview)

Monsieur Cinnamon (MC), the character we'll lip-synch, is a simple character with a complex personality. In one scene, he acts bubbly, exuding friendliness, and in another he is, hmmm, to put it nicely—a jerk. Keep this in mind as you create your character. The role the character plays in the animation should dictate the style of the lip-synching process. To prepare for lip-synching, imagine the mannerisms of your character. Are they confident, outspoken, shy, or bashful? A person portrays a great deal about their personality through mannerisms and their tone of voice. As a lip-synch animator, your job is to convey the character's personality through its facial movements. MC's temperament is one of extremes, smiling one second and in the middle of a mad tirade the next. The animation of the head must coincide with the body. For example, if MC's body language is exaggerated, his facial movements must also be overemphasized. Before you begin, open MCmagnifique.mov on the CD-ROM and watch MC speak the line we will be lip-synching. If you want to create the head shape, simply open a new file in Maya and start working. You can use this file to move through the steps of the example. Remember, as you create the head shape, MC should be able to look cute, cuddly, self-centered, or mean with the simple push of the blend shape sliders we will create. Now, follow these steps (a brief MEL sketch of the starting sphere appears after the list):
1. Create a NURBS sphere for the head shape.
2. Make sure the poles are approximately where you want the mouth opening.
3. Use a minimal number of sections when creating the sphere. (It is much easier to add detail later than fight with a heavy model.)
4. Select the CVs in the center of the sphere.
5. Pull the CVs inside the sphere, and arrange them to form the cavity inside the mouth. As sort of a cheat, you can create a sphere, set the sweep angle to 180, and scale the half sphere so that it fits inside the head shape. Use a dark color to texture the surface in order to give the appearance of the inside of the mouth. Creating the hemisphere is quicker than shaping the interior CVs.
6. Use the Sculpt Surfaces tool and/or manipulate the CVs to build the facial structure: cheeks, eye sockets, chin, lips, and forehead. Remember, MC is a simple character, so you don't need detailed facial features. After all, Monsieur Cinnamon is made of dough.
7. Name the sphere original_head.
8. Create eyeballs and eyebrows.
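Here is the promised sketch. The section and span counts are reasonable low-resolution guesses rather than values from the book, and original_head matches the name given in step 7.

// A light NURBS sphere to start the head; few sections keep it easy to edit.
sphere -name "original_head" -sections 8 -spans 8 -axis 0 1 0;

// The cheat from step 5: a half sphere (180-degree sweep) for the mouth cavity.
sphere -name "mouth_cavity" -startSweep 0 -endSweep 180;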
Preparing the Character for Blend Shape Creation

Depending on the complexity of your character, blend shape creation can be one of the most time-consuming portions of the lip-synching process. However, it is definitely worth investing the time up front because the blend shapes provide smooth, easily controlled transitions from one shape to another. Blend shapes are similar to morphing, in which one shape transforms into the other. Sliders control the functionality of the blend shapes. Spend the extra time as you create the forms to make sure that the mouth shapes look correct. No matter how skilled you are when it comes to matching the timing of the sound and animation, you will not have a successful lip-synching experience if your blend shapes are torn or lacking. You will find that blend shapes are easier to make if there is more detail in the mouth, cheek, and forehead areas. Since our character is fairly simple, we will only add CVs to the mouth area. The isoparms we will add will help give the appearance of underlying facial muscles. Follow these steps:
1. Switch to the Modeling menu.
2. The mouth area does need extra control, so add one or two isoparms around the mouth.
3. Select the face. RM click and choose Isoparms.
4. Click an isoparm near the mouth area, where you need more detail, and drag the yellow selection to the place where you want to insert the isoparm.
5. To add an isoparm, choose Edit NURBS → Insert Isoparms (option box), choose Edit → Reset Settings, and then click the Insert button.
6. Repeat this process until you have enough detail to shape the mouth area in a variety of poses.
7. RM click the head shape.
8. Select an isoparm near the area where you want to insert an isoparm, and drag the yellow isoparm to its new location.
9. Repeat until you are satisfied with the level of detail around the mouth. (Be careful not to overdo this stage!)
The CVs and hulls that result from adding isoparms may not be positioned exactly as expected, yet they will deform the shape in the area where you inserted the isoparm.

Creating a Neutral Pose

Your character needs the ability to go from happy to sad or from a relaxed pose to a scream. The easiest way to create the variety of poses is to start with a neutral expression (see Figure 5.8). The neutral pose will be the base shape of the blend shapes to come. Any changes you make to the neutral pose will be developed into a blend shape, showing the movement from the neutral expression to the new expression. You can combine each of the blend shapes to simply alter the new shape to form another pose. Make sure your model is clean and that the CVs are easy to manipulate. When modeling, keep the geometry light. Remember, you can add control points or vertices when you need them. If you have too many control points, it will be frustrating to manipulate the model. It is much easier to add points than to take them away without affecting the shape you are modeling. If you are ready to jump right into the lip-synching, open Mcblendshapes.mb on the CD-ROM. This file contains the head and all the blend shapes. You can also create more blend shapes of your own. This example is a linear process, so you will build on each step that you complete. If you are going to complete the entire example, begin with the Mcheadonly.mb file, and use this file throughout. You will be asked
Figure 5.8: A model of Monsieur Cinnamon's head in a neutral pose
to open different files. In some cases, the file will show you the finished portion of the process. Use these files as examples and continue working in either the Mcblendshapes.mb file or the Mcheadonly.mb file. Don't forget to save your work along the way!

Detaching Surfaces
When a person speaks, the mouth, nose, eyes, cheeks, and forehead all move in relationship to one another. To provide easier (and separate) control over these movements, we'll split the head into three sections. We'll then create blend shapes that utilize the separate areas. Once all the blend shapes are created, we'll put MC back together again! The blend shapes will be combined to move all areas of the face so that the animation doesn't look stilted. Open Mcheadonly.mb on the CD-ROM, and look at how MC is naturally divided by his isoparms. For our purposes, we want section one to contain the mouth, chin, and cheeks. Section two will consist of the eyes and the forehead. Section three will be the back of the head. When slicing the character, consider which groups of facial movements occur together. If you are working with a more realistic model, you'll divide the face differently than we are with MC.
To detach the surfaces, follow these steps:
1. RM click the head shape, and choose Isoparm from the pop-up menu.
2. Select the isoparm that falls midway on the nose (see Figure 5.9).
3. Choose Edit NURBS → Detach Surface (option box).
Figure 5.9: The selected isoparm illustrates how the face will be cut into sections.

Figure 5.10: The isoparm signifies where the surface will be detached.

4. Clear the Keep Originals check box.
5. Choose Detach.
6. Deselect the object.
7. Select the new surface and press 3 on the keyboard so that the mouth patch will display smoothly.
8. Rename the new surface mouth_patch.
9. Select the upper portion of the head.
10. Choose the isoparm, as shown in Figure 5.10.
11. Choose Edit NURBS → Detach Surface. Press 3 to display the object smoothly.
12. Shift+select the detached surface, the eyelids, the eyes, and the eyebrows. Press Ctrl+g to group the objects together.
13. Rename this group eye_forehead_patch.
14. Select the back of the head, and rename it back_patch.
Figures 5.11, 5.12, and 5.13 show the three portions of MC's head.
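Scripted, the same split might look like the short sketch below. The isoparm parameter value is only a placeholder; you would use the isoparm you actually selected on the model, and you would then rename the resulting pieces as in steps 8, 13, and 14.

// Select the isoparm to cut along (0.5 is a placeholder parameter value),
// then detach; Maya returns the new surface pieces, which you rename.
select -r "original_head.v[0.5]";
detachSurface;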
Figure 5.11 : MC's mouth, chin, and cheeks
Figure 5.12: MC's eyes and forehead
Figure 5.13: The back of MC's head
Creating Blend Shapes

Now it is time for what you have been waiting for: blend shape creation! You will soon realize why taking the extra time to create the shapes to manipulate is so important. If your facial expressions are not done correctly, you will spend even more time focusing on the technicalities of expressing a certain sound. The purpose of blend shapes is to save time during the animation process. Blend shapes offer many advantages over keyframing CVs directly or using other techniques. Once the blend shapes are set up, you can quickly and easily move from one expression to another. You can also combine different shapes to gain extended use of the facial expressions you have created. You can even create new blend shapes from a combination of the blend shapes you have already prepared.

To make full use of blend shapes, you must first understand the basic terminology. Blend shapes are a combination of a base shape and a target object. The base shape is the object that you want to manipulate. The target object is the shape of the object that you want to achieve. In MC's case, the three sections of the original head shape are base shapes. The duplicated copies you manipulate are the target shapes. A blend shape is the combination of the target shape and the base shape, the target shape being the look we are aiming to achieve. The base shape is the character's starting point, and the target shape shows the results of the deformations. Consider which portion of your mouth moves as you speak, and manipulate the CVs accordingly. For example, the upper lip does not move as much as the lower lip, so base the opening of the mouth more on the CVs of the lower lip rather than on the CVs of the upper lip. Blend shapes are a strong lip-synching tool. Keep in mind that they are also useful in a variety of other animation techniques.
Creating Blend Shapes for the Mouth

Lip-synching is based on phonetics. Don't try to create a pose for every letter; you will surely drive yourself crazy and waste a lot of all-too-precious time. Instead, base the lip-synching on the major sounds in a word. You are trying to animate "the big picture" of each word.
Figure 5.14: Basic blend shapes needed for lip-synching
For example, if you are lip-synching the word "the," animate only "th." Figure 5.14 shows some helpful blend shapes. Depending on the complexity of your character, you may need more or fewer shapes. With these basic pointers in mind, you are ready to begin creating expressions for MC! Follow these steps:
1. Select mouth_patch, and press Ctrl+d to duplicate it. Do not instance a blend shape target object.
2. Rename the shape open.
3. With open selected, choose Edit → Delete by Type → History.

Each time you create a shape, move the new shape to a clean area of the workspace. Otherwise, all the shapes will be placed directly on top of one another. Also, do not freeze any of the transformations. If you do, the blend shapes will not work.
4. Press F8 to switch to Component mode.
5. Position the CVs to create an open mouth shape.
6. To open the mouth and pull the jaw down, select CVs in any of the following ways:
• Shift+LM click to select multiple CVs.
• Ctrl+LM click to deselect specific CVs.
• Shift+Ctrl+LM click adds to the selection, whereas Shift+LM click toggles CVs between selected and unselected.
• Pick walk from one CV to another using the arrow keys.
• Click Select by Component Type: Hulls to see the relationship between the CVs.

Manipulating the CVs takes patience, but it's time well spent, as you will soon realize. When you are pleased with the open mouth shape, it's time to create a blend shape.

Creating Blend Shapes

To create blend shapes, follow these steps:
1. Select the open shape.
2. Choose Edit → Delete by Type → History.
3. Shift+select the base shape mouth_patch. The first shape you selected is the target object; the second is the base shape. In general, all shapes up to the last one selected are target shapes, and the last shape you select is the base shape.
4. Switch to the Animation menu.
5. Choose Window → Animation Editors → Blend Shape to open the Blend Shape window.
6. From the drop-down menu, select Create Blend Shape.
7. Double-click the blend shape in the Outliner, and rename the shape mouth.
8. Test the shape by moving the slider. The mouth should open and close.
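If you would rather work in the Script Editor, the following MEL sketch mirrors steps 1 through 8. It assumes the object names used in this section (open and mouth_patch) and names the new blendShape node mouth so it matches the slider described above.

    // Remove construction history on the target, then create the blend shape.
    delete -constructionHistory open;
    // Targets are listed first and the base shape last, matching step 3's order.
    blendShape -name "mouth" open mouth_patch;
    // Quick test, equivalent to dragging the slider up and back down:
    setAttr "mouth.open" 1;
    setAttr "mouth.open" 0;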
Controlling the Blend Shapes

You can control the order of deformation between the two shapes. You can also add target shapes, reset all values to zero, select the blend shape, and set keys on the shape. The blend shapes are controlled by sliders. Moving a slider upward results in a new pose; returning the slider to the starting position returns the face to the original pose. You can use the sliders in combination to create a variety of shapes. This feature is wonderful because it allows you to reduce the number of blend shapes you need to make. Each blend shape contains Select, Key, and Reset buttons. If you are using multiple target shapes for a single blend shape, you can key, select, and reset all the sliders with a single button for each task.

The envelope attribute of the blendShape node allows you to control the exaggeration of each shape. If you set the envelope attribute to a negative number, the shape provides opposing expressions. For example, if you create a smile shape and use a negative envelope value, the result is a frown. The envelope attribute is found under the slider for each blend and target shape, as well as in the Channel box for the blend shape, and its default value is set to 1. Experiment with different values for each shape. You can turn a raised eyebrow into a scowling expression. The envelope attribute doubles the usefulness of your blend shapes.

Modeling a Second Blend Shape Using the Open Shape

To create a target shape, follow these steps:
1. Open Mctargetshapes.mb on the CD-ROM.
2. Use the slider to open the mouth.
3. Select the shape, and press Ctrl+d to duplicate the modified base shape.
4. Rename the result oh.
5. Move oh to an empty area in the workspace.
6. To delete the history of oh, choose Edit → Delete by Type → History.
7. Since the mouth is already open, it will be easier to create the oh form from this shape.
8. Move and scale the CVs until you have a shape similar to that shown in Figure 5.15.

Figure 5.15: The "oh" blend shape
Creating a Second Blend Shape

To create a second blend shape, follow these steps:
1. Select the oh shape.
2. Shift+select mouth_patch, and in the Blend Shape window, choose Create Blend Shape.
3. Test the results by moving the slider. The mouth should move inward to form an O.

Because you will be creating a variety of shapes, remember to rename your shapes so that you can keep track of the sliders when you are keyframing. Don't forget to delete the history after duplicating an object.
Adding a Target Shape

Computer animation depends on the ability to automate tasks and ease the workload of the nonartistic portions of an animation. Adding target objects to a blend shape greatly reduces the amount of time it takes to set keyframes on a character. You can set keyframes for multiple sliders with the click of a single button, but you can also control the sliders individually. Experiment with the sliders in the Mctarget.mb file on the CD-ROM. You will quickly understand the benefits of using target and blend shapes. It's fun to make a variety of expressions, and in a short time you can have a completely animated sentence or animation. Once the blend shapes are set up, the lip-synching process moves quickly from one word to another.

Multiple target shapes can influence a single blend shape, allowing for centralized controls and a simplification of the character-creation process. You can add as many target shapes as you want to a single blend shape using the Add button in a blend shape. This way, each of the targets can be used individually or in combination with the other targets to form new shapes. Creating target shapes improves the functionality of blend shapes. All the controls are centralized and easy to manipulate. When you create a target object, the base shape is baked and used as the new shape. The target shape must have the blend shape in the object's history. You will have multiple sliders to control using a single blend shape. This is advantageous since you can key and reset all the sliders individually or as a group using the Key All or Reset All buttons.

If you instead choose Deform → Edit Blend Shape → Add (the option box) to create a blend shape, a separate blend shape is created. This is not a target object. Each blend shape must be added to a character and keyed and reset individually. Each blend shape can be combined with the others, but you must key each blend shape that influences the shape.

When you have a large number of target shapes, you might prefer to change the display of the Blend Shape editor. This option is found in the Blend Shape window under Options. You can view the blend shapes either vertically or horizontally.

To add a target shape, follow these steps (a scripted alternative is sketched after this list):
1. Select mouth_patch, and set the oh blend shape slider to 1.0.
2. Click the Add button in the mouth blend shape.
3. Move the new head shape to an empty area of the workspace. If you are positive you will not need to edit this shape, you can delete it. To be on the safe side, you might want to create a file that contains all the shapes you used to create the blend shapes and have a separate file without all the blend shapes.
4. Notice that a new slider bar has been created in the mouth blend shape controls. Experiment using a combination of the two sliders to form different shapes.
Figure 5.16: Target shapes allow for easier control over blend shapes.

5. Repeat the process of creating blend and target shapes for the mouth shapes suggested in Figure 5.16.

When creating facial expressions, you might experience unexpected results if you add or remove CVs.
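As a scripted alternative to the Add button, a target object can also be appended to an existing blendShape node from MEL. This sketch assumes the node and object names used above (mouth, mouth_patch, oh); the target index shown is an assumption, so use the next unused slot on your own node.

    // Append "oh" as an additional target of the existing "mouth" blendShape node.
    // Index 1 is assumed to be the next free target slot (index 0 holds "open");
    // adjust it if your node already has more targets.
    blendShape -edit -target mouth_patch 1 oh 1.0 mouth;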
Adding Movement to the Eyes and Forehead

The movements of the eyes and the forehead emphasize the character's emotions and mirror its mood. If the eyes and forehead don't move, the lip-synching will be boring; yet if the movement is overdone, the facial animation is not believable. As with all aspects of lip-synching, you act as an artist, brushing in just the right amount of emphasis or restraint.

As an artist, you need to be consistently observant of the people around you. Notice what small facial expressions and quirks really make someone's personality memorable. When you are lip-synching, you can re-create these small details in your character, which catapults the animation to a new level of greatness!

When creating blend shapes from a group of objects, it is important that the target object and the base object be grouped in the same order and have the same group members.
Figure 5.17: Manipulating the CVs into poses
Again, we'll manipulate the CVs into different poses. You will want to create the shapes shown in Figure 5.17. Pose A shows surprise, excitement, and enthusiasm. Both eyebrows are raised, there are wrinkles in the forehead, and the eyes are slightly more open. Poses B and C show questioning emphasis and surprise by raising one eyebrow. Pose D illustrates anger and disgust. There are wrinkles on the bridge of the nose and forehead, and the eyebrows are slanted inward.

Creating a Look of Happiness or Surprise

To create a look of happiness, follow these steps:
1. In the Outliner, select the eye_forehead_patch group.
2. Press Ctrl+d to duplicate the objects.
3. Rename the new group happy.
4. Move the group into a blank area of the workspace.
5. Press F8 to switch to Component mode.
6. Select the CVs closest to the nose on the left eyebrow, and move the selection upward, as shown in Figure 5.18.
7. Repeat the previous step for the right eyebrow.
Figure 5.18: Select similar CVs to move the left eyebrow and forehead upward in surprise.
Figure 5.19: Select these CVs to make the eyes widen in surprise.
8. Move the CVs along the bridge of the nose and the center of the forehead forward and up to create wrinkles. When creating shapes, avoid moving the CVs on the outermost isoparm of the patch. Otherwise, you will have unexpected wrinkles in the blend shapes.
9. Select CVs on the top half of the eye and eyelid, and move them upward to widen the eyes, as shown in Figure 5.19. Be careful to choose only the CVs you want to move; it is easy to inadvertently select CVs you did not intend to.
10. Manipulate the CVs until you are satisfied with the shape, and then choose Edit → Delete by Type → History.

Creating a Blend Shape Using Groups

Blend shapes are also useful for animating more than one object in unison. For example, the eyes and eyebrows are separate geometry that need to stretch and move according to the motion of the cheeks and forehead. You will want to make sure that the blend shape has exactly the same members as the target shape and that the two groups hold the objects in the same order. If the order is different, the blend shapes will not work properly, and you will probably end up with a character that fits perfectly in a horror movie. Using groups allows you to keep a smooth relationship between the objects.
To create a blend shape using groups, follow these steps:
1. Press F8 to switch to Object mode.
2. In the Outliner, select the group happy.
3. In the Outliner, Ctrl+select the eye_forehead_patch.
4. Switch to the Animation menu.
5. From the main menu, choose Deform → Create Blend Shape.
6. Rename the blend shape upper_face.
7. Test the sliders to watch the face change from calm to happy. Change the envelope attribute to -0.50 and move the slider again. Notice that the eyebrows can now move down in anger or up in concern. The slider range below zero makes the eyebrows move downward, and the range greater than zero makes the eyebrows move upward.

Repeat steps 1 through 7 to create other facial expressions. As with the mouth blend shape created earlier, use the upper_face blend shape to hold multiple targets.

Putting It All Back Together

Even though we want separate control over portions of the face, the animation would be a little disturbing if we were to watch a talking head in three pieces, so we need to figure out how to put MC back together again. Test each slider to make sure you are satisfied with the results; it is easy to edit the blend shapes at this point. Once the separate surfaces of the head are reattached, you won't be able to create separate blend shapes for the eyes and mouth, but you can always duplicate and reshape the entire head to achieve the desired pose. Since the pieces of the face are working properly, the surfaces can be reattached.

To put MC back together again, follow these steps:
1. Select the mouth_patch and the eye_forehead_patch.
2. Press F8 to switch to Component mode.
3. Set all components to Off.
4. Click the Isoparm button.
5. Select the two isoparms where the surfaces touch, as shown in Figure 5.20. (Ctrl+click to deselect unwanted isoparms.)
6. Choose Edit NURBS → Attach Surfaces.
7. Repeat steps 1 through 6 using the eye_forehead_patch and the back_patch.
MC is back in one piece. Switch to Object mode, and select the head. The entire surface should appear green, as in Figure 5.21.
Controlling the Face as a Whole

Now you can animate the upper and lower portions of the face, but wouldn't it be nice to place all the controls into one slider? For some emotions, such as anger or happiness, you can combine the sliders for different parts of the face, as shown in Figure 5.22. Continue working with the blend shapes in the file you have, or use Mcblended.mb from the CD-ROM to complete this portion of the lip-synching process. The Mcblended.mb file contains all the blend shapes.
Figure 5.20: Select these isoparms to reattach the surface.
Figure 5.21: The three sections of the head are reattached to form one shape.
Figure 5.22: A happy face and the sliders used to create the form
To combine the sliders, follow these steps:
1. Select the head shape, the eyes, the eyelids, and the eyebrows, and press Ctrl+g to group the objects together. Name the group head.
2. Move the sliders until you get the facial expression you want.
3. In the Outliner, select the head group and press Ctrl+d to duplicate the face. Move the duplicate to a clean portion of the workspace. Name the group all.
4. Delete the history.
5. In the Outliner, select the groups all and head. Now create a new blend shape.
6. Repeat steps 2 through 5 for other emotions, and then add the combined facial expressions as target objects.
Creating a Character

A character allows you to key multiple objects as a single entity, which is exactly what we need to do when lip-synching. When assigning members to a character set, you determine which attributes from the objects can be keyed. Creating a character also allows you to create clips and reduce animation time through nonlinear animation techniques. Once you create a character, you can add subcharacters to the character set. Characters let you animate quickly and efficiently while keying only the necessary attributes. Creating characters helps to organize the clips and lets you easily see the keys for each frame. Pay attention when setting your character: you will notice some crazy, not to mention strange, results if you forget to set the character.

To create a character, follow these steps:
1. Be sure that nothing is selected and that you are in the Animation menu.
2. Choose Character → Create Character Set, and open the option box.
3. Name the character MC, and set the Character Set Attributes option to From Channel Box. Once the character is created, you need to assign the attributes that can be keyed.
4. In the mouth Blend Shape window, click the Select button.
5. In the Channel box, highlight the shapes you want to include in the character, as shown in Figure 5.23.
6. From the main menu, choose Character → Add to Character Set.
7. For each blend shape, follow steps 4 through 6, adding the characteristics you want to animate to the character set. Be sure to include all the shapes you need for lip-synching.

Figure 5.23: Attributes to include in the character set
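A character set can also be created from MEL. The sketch below is only an illustration: the attribute names (mouth.open, mouth.oh, upper_face.happy) are assumptions standing in for whichever blend shape weights you highlighted in the Channel box.

    // Create the MC character set containing just the attributes we intend to key.
    // The listed attributes are hypothetical examples of blend shape weights;
    // additional attributes can be added later from the Character menu.
    character -name "MC" mouth.open mouth.oh upper_face.happy;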
Creating a Lip-Synching Library

This is the stage where all the planning and blend shape creation proves beneficial. You will discover that words in your script are often repeated. Creating a word library lets you animate a word one time and reuse the animation each time the word is spoken. Depending on your script, you will probably notice that portions of words are used repeatedly. Often, you can simply tweak a word you have already animated to create a new word. You can also transfer the library to other characters.
Just as we created the blend shapes in sections, it is also easier to animate in sections. Concentrate on the mouth movement first. When you are pleased with your animation, add the eyes and forehead. Work in steps so that the task does not become overwhelming. We will have separate clips for the mouth (word) and eye (emotion) shapes. They can be arranged to coincide in the Timeline at a later point.

Here are some guidelines to follow as you create the library:
• Use a mirror and watch yourself speak. Notice how your mouth moves, the shapes it forms as you say specific sounds, and the amount of movement.
• When you are lip-synching, be a minimalist. Listen to the words and determine the most prominent sounds. Animate the word using only the shapes that make up the main sounds. Watch your animation, and you will be surprised at the results. Your mind will fill in the remaining shapes.
• Create playblasts to monitor the movement. You will soon discover that it is much easier to fill in gaps in the lip-synching than to remove unnecessary animation.

The rate of speech influences how much detail you place on each word: the faster a character speaks, the less movement you need to give to an individual sound. The phonemes influence the character's movements before and after they are spoken. Consider the timing of each movement. If a character suddenly opens its mouth or pronounces a long vowel, you may find it necessary to tone down the movement. The mouth will appear as if it is randomly moving out of control if you do not allow enough frames for the motion to take place. Creating a library lets you use the Trax Editor to edit a clip as necessary each time the word is used.
Creating Word Clips

The word library is composed of a multitude of clips; a clip will be created for each word used in the script. While lip-synching, you'll be surprised to see how quickly people speak and how few frames of animation each word receives. It's easy to overanimate lip-synching. Remember to lip-synch according to the phonetics of each word, and also remember that one word will need to blend into the next. We will create clips for each word based solely on the shapes made while saying the words. Later in this example, we will scale the clips so that the action matches the timing of the voice.

Our character, MC, aka Monsieur Cinnamon, is a French actor in a croissant commercial. Let's animate one of his lines: "Zay are magnifique!" The sentence is composed of three words, yet we are going to create four clips: one for each word, and another to use as a beginning and an end pose. The words influence one another, and you will achieve a smoother transition between clips if you have separate open and closed clips. Do not set keys for the opening and closing motions at the beginning or end of an individual word. The main sounds are a long a for "zay," r for "are," and mag, na, feek for "magnifique." We'll use the closed clips to position the mouth before the animation begins and blend the phonetic shapes with the closed shapes. Since the motion of talking occurs so quickly, the mouth does not totally close between words. If you close the mouth in each of the clips you create,
the mouth will appear to be moving rapidly out of control. Even if the clip looks fine, when you scrub through the animation, it will look overdone when the words are joined together. Keep in mind that 30 frames equals 1 second, and sometimes people speak more than one word per second. Minimalization is the key to successful lip-synching. A clip can consist of only one or two keyframes. For example, the words "zay" and "are" have only one keyframe that is scaled to last the length of the word. Each clip will begin at frame 1 and can be moved to a new position later. Follow these steps:
1. Open MClast.mb from the CD-ROM to see a finished version of the clips and final animation. Open Mcreadytotalk.mb to begin adding clips, or you can continue working in your own file.
2. Set the character to MC.
3. Listen to the MCmagnifique.wav file on the CD-ROM to hear the voice you will be lip-synching. Try to picture the shapes that you will need.
4. Open the Blend Shape window (choose Window → Animation Editors → Blend Shape). Be sure that every blend shape you want to use is assigned to the character set before you begin setting keyframes.
5. Move to frame 1 and create the zay clip by moving the slider within the mouth blend shape (remember, you can also use a combination of sliders) until you have a shape similar to that in Figure 5.24.
6. Click the Key All button on each of the blend shapes you moved to create the shape. This clip will contain only one keyframe.
7. Switch to the Animation menu.
8. Choose Animate → Create Clip, and open the option box.
9. Name the clip zay.
10. Click the Put Clip in Visor Only check box, and create the clip.
11. Create the "are" clip by following steps 1 through 10.
12. Name each of the clips for the word you are animating. Figure 5.25 shows the "r" sound that MC says next.
Figure 5.24: Combining the results of the ah slider with a little of the ee slider will form a long a shape.
Figure 5.25: Slider positions used to create an "r" shape
13. Follow steps 4 through 12 to create keyframes for the word "magnifique," as shown in Figure 5.26. You will need four keyframes. Start at frame 1, and leave one to three frames between each keyframe. You can leave more frames if the position of the mouth is moving from a mostly open to a closed position or vice versa. When setting keyframes for the words, allow one frame between consonant sounds and two or more frames on both sides of long vowel sounds. Later in this chapter, we'll scale the clips to match the length of each word.
14. Create a clip named closed, in which the mouth is closed. This clip should be only one keyframe. You might want to create a few different closed clips to add variety to the animation. For example, you could create a closed clip, a smile, a frown, and a barely open clip, as shown in Figure 5.27. Don't forget that you can use the happy face blend shape you created earlier as a starting or stopping point for the sentence. Don't forget to make clips for the eyes and forehead!
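For reference, here is a hedged MEL sketch of one pass through steps 5 through 10 for the "zay" clip. The slider values are hypothetical, and the clip command flags shown (-name, -startTime, -endTime, -scheduleClip) are the ones that correspond to the Create Clip options described above; check them against your Maya version.

    // Pose the mouth at frame 1 (hypothetical slider values for a long "a"),
    // key the whole character set, then store the pose as a Visor-only clip.
    currentTime 1;
    setAttr "mouth.ah" 0.8;
    setAttr "mouth.ee" 0.2;
    setKeyframe "MC";            // same effect as clicking Key All on the set
    select -r MC;
    clip -name "zay" -startTime 1 -endTime 1 -scheduleClip off;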
Figure 5.26: Suggested shapes to form the word "magnifique"
Figure 5.27: Examples of different "closed" clips
Using the Trax Editor

Whew! Now that the clips are created, we can begin to synch the clips with the sound file. The clips you created are stored in the Visor. You can open the Visor from the main menu by choosing Window → General Editors → Visor. Click the Clips tab to see the clips you have created. If the clips do not appear, be sure you have chosen the appropriate character. MM drag a clip from the Visor to the appropriate character in the Trax Editor. You can use the Trax Editor to organize and rearrange the clips you have created, which gives you maximum usage from each clip.

Creating a Sentence from Character Clips

The process of matching the mouth movement to the emphasized sound is accelerated because of the blend shapes you created. Organizing the clips is a simple way to quickly animate a large number of words. As your word library grows, you will gain a better understanding of the capability of each blend shape, increasing your lip-synching speed. This portion of lip-synching requires patience and an artistic eye. Playblasts are a quick way to see the results of your work. You will find that a good deal of tweaking may be necessary to make the lip-synching believable. Have fun with lip-synching and try not to become frustrated.

To create a word from character clips, follow these steps:
1. Make sure the sound track is represented by green lines in the Timeline. Right-click the Timeline to select the name of the file you want to display. To import sound, choose File → Import and browse to the sound file you want to reference. In this case, choose MCmagnifique.wav from the CD-ROM.
2. Place the following clips in the Trax Editor: closed mouth, zay, are, magnifique, and a second closed mouth.
3. Place the closed mouth clip four frames before the character should begin to speak. MM scrub over the Timeline to hear the sound track. This will help you coordinate the sound and the movements.
4. Place the zay clip on the frame with the first green sound line in the Time Slider.
5. Align the other clips so that they begin a frame or two before the matching word is spoken.

Scaling Clips

Scaling a clip is a great way to extend the use of your animation work. When words are repeated, they can be spoken at different rates of speed. If the character is excited, it might speak faster than normal; if the character is distracted, its speech patterns might slow down. Scaling a clip lets you use the same clip each time a word is spoken, even if the speed or emphasis of the word is different. To scale a clip, mouse over the lower end of your clip. A straight arrow with a line appears, indicating that you can now drag to scale the clip (see Figure 5.28).

Cycling a clip lets you repeat an animation as many times as the script calls for. Cycling a clip is especially useful when someone is laughing. You can set the cycle to Relative to add a little variety to the motion, or you can set the cycle to Absolute to begin and end the clip exactly the same each time a cycle completes. To cycle a clip, mouse over the top end of your clip. A curved arrow will appear. Drag the clip to the desired ending position. A black tick mark denotes a complete cycle. You can scale and cycle a clip more than once, giving yourself a wider variety of results from the same piece of animation.

To scale a clip, follow these steps:
1. Scale the clip until it matches the length of the word. You can scrub through the Timeline to hear when each sound is said and when words begin and end. This should be all the scaling you need to worry about for the words zay and are.
2. Right-click the magnifique clip and choose Activate Keys to return your keys to the Time Slider for editing.
3. The keyframes begin at frame 2. To simplify the scaling process, Shift+click frame 1 in the Time Slider, and drag the mouse until a red bar covers the last key. (Keyframes are denoted by red bars in the Time Slider.)
4. Click the middle two arrows in the red bar, and drag the keys to the frame before MC begins to say "magnifique." Now you can continue to edit the keys in the Timeline so that the movements are scaled to match the sounds.
5. Scrub through the Timeline and listen to magnifique. Shift+click and highlight the keys you need to position, and move them to the appropriate position in the Time Slider. Match the placement of the keys in the Timeline (or Graph Editor) with the emphasized portions of the words.
6. Return to the Trax Editor window and right-click the magnifique clip. Choose Activate Keys to remove your keys from the Time Slider, which allows you to use the magnifique clip. In this way, the shapes are formed as the words are spoken. Once you are familiar with scaling keys, this portion of the process will move quickly. Since the sentence is spoken without a pause, blending the clips provides a smooth transition between words. Notice that the mouth does not close entirely after every word. Use the closed mouth shapes at the beginning or end of a phrase. If there is a long pause, consider whether the character is happy or sad and what needs to be portrayed during the pause. You might want to blend the nonverbal clips with the word clips.
7. Select the zay clip and Shift+select the are clip.
8. In the Trax Editor, choose Create → Blend.
9. Create blends between the remaining clips. Scrub through the animation. If the character's movements are overemphasized, make the correction in the Trax Editor.

The weight attribute determines how much movement appears in the animation. To reduce the effect of a blend, lower the weight; if you want to exaggerate the motions, increase the weight. The script might call for the same word to be whispered in one scene and screamed in another, or perhaps sometimes the words are just overexaggerated. The weight of the clip influences the extent of action in each clip. Select the clip, and in the Channel box, set the weight to 0.2. Now scrub through the animation. Change the weight back to 1, and notice the results. The weight attribute is useful for conveying emphasis and emotion. When you play back the animation, you will discover that you need to reduce the weight of each clip. Experiment with values from 0 to 1 until you are happy with the results.
Testing the Lip-Synching

It is finally time to see the results of your hard work! Create a playblast and let the talking begin! If MC's movements appear to lag a frame or so behind the sound, you can offset the sound track in the Timeline. Follow these steps:
1. Right-click the Timeline.
2. Choose Sound → the name of your track.
3. In the Audio Attributes options, set the offset to -2.

Alternatively, you can drag+select all your Trax clips and move them two frames or so to the right in the Trax Editor.
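The sound import and the two-frame nudge can also be done from MEL. This is only a sketch: the node name and file path are assumptions, and you should point the -file flag at wherever you copied MCmagnifique.wav.

    // Import the track, then shift it two frames earlier so the mouth no longer lags.
    sound -file "MCmagnifique.wav" -name "MCmagnifique";
    sound -edit -offset -2 "MCmagnifique";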
Tweak the clips until you are pleased with the lip-synching, as shown in Figure 5.29 (also see a rendered sequence of MC speaking in the file MCmagnifique.mov on the CD-ROM). If the mouth looks as if it is moving wildly and is not in synch with the sound, you probably tried to create too many sounds for a word, or you need to reduce the weight assigned to the clip. Remember to lip-synch phonetically, not according to the actual letters. Now that you are pleased with the words, add some movement to the upper portion of the face. You might want to set the character to MC and simply create one clip of movement that covers the entire sentence, by setting keyframes throughout the Timeline each time you want movement. Again, do not overdo the motions of the forehead and the eyes.
Figure 5.29: MC in action!
Say the sentence while looking in a mirror. Notice when you move the upper portion of your face, and re-create those movements using the blend shapes you created. Don't forget to add movement to the character's head; a person's head is not still as they speak! Keep in mind that lip-synching requires an immense amount of patience and attention to detail. The minor quirks or subtle movements in an animation separate the good characters from the great ones.
Hands-on Example 2: Photo-Real Facial Replacement and Lip-Synching

In stop-motion animation, lip-synch is essentially achieved through "head replacement" techniques. In these cases, the entire head is removed from the armature puppet or molded model and replaced with an exact duplicate head that has a different mouth position. In CG animation, the concept remains much the same. We typically go about lip-synching by replacing the head in much the same way by using blend shapes, a.k.a. morph targets. At each frame, the digital model is seamlessly swapped out for another with a different mouth shape. The actual process of 3D lip-synching using blend shapes doesn't remove one head in favor of another; in actuality, it deforms the model's head or mouth area to fit the shape of another model of the head with that mouth position in mind. Still, it is much simpler, yet still fairly accurate, to also refer to this technique as "head replacement."
Defining the Task

In more and more effects shots, whether for film, television, or the game market, the digital animator is asked to create a talking creature or person—such as a baby—from a live action shoot, to augment the reality already captured on film or tape. When dealing with an otherwise real person or animal, it is a waste of time and energy to create a full head to match the live action subject, and a more direct replacement technique is required: mouth replacement. The purpose of this example is to replace a mouth on a loyal and patient pet cat, Rox. With this type of lip-synch, we are going to replace only a part of the talking animal or person (or any other object, for that matter), as opposed to setting up an entire head and rigging only the mouth to talk. Since the subject is already shot (in this case, a digital still photo) and we are making a photo-real animation, it becomes incumbent on us to use as much of that information as possible and minimize the amount of CG inserted into the scene. In this example, we'll start with a rudimentary snout for the cat and use the live action still image as a direct texture for it. It makes no sense to try to work umpteen different shaders and lights to match the live action background when we can just use the real photo. Once we accomplish that, we'll build several replacement blend shape snouts to mimic mouth movements and, along with a simple joint structure for jaw movement and an internal mouth structure, create a deformation chain that includes clusters to make the cat talk.
Modeling Rox's Snout

First, we should take a look at our live action plate(s). In this case, we'll be using Rox.tif (in the sourceimages folder of the Rox_lip_sync project on the CD-ROM) as the BG plate. This image is at NTSC standard D1 resolution, which is 720 by 486. To match your render output to this resolution, open the Render Globals and select CCIR 601/Quantel NTSC in the Resolution Presets. Once you change your resolution, open your camera's Attribute Editor and change the Film Gate to 35mm TV Projection.

Load the image into your Perspective window (persp) by choosing View → Image Plane → Import Image in the Perspective window. In most cases, it will come in fitted slightly off your resolution gate. Open the image plane attributes, set Fit to To Size, and click Fit to Film Gate. You should now have the image perfectly fitted to your Perspective window's resolution and film gate. What you see in this window is what will render.

In the interest of brevity, we'll condense the modeling portion of this exercise and say only that we begin with a NURBS shape lofted together from profile curves loosely matching the profile of the cat's snout. With this sort of mouth replacement, it is not at all necessary to create a dead-on model of the cat's snout. We're aiming for something that fits over and gets the general shape and proportions of the snout. To achieve that, we'll take the model, match the camera angles as best as we can, and then eventually tweak the geometry to conform to the outlines of the cat's snout. But we're getting ahead of ourselves; let's get back to the basic snout model and the gums and teeth. The only serious point to remember in creating the model is to pay close attention to the overall outside shape; try to match the outside edges of the model to the subject. Of course, if the subject is moving and exposing multiple angles of itself, your model will have to be more accurate, since the snout needs to be precisely motion tracked to the live action subject.
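If you would rather set the D1 output described at the top of this section by script, this minimal MEL sketch does the same thing as choosing the CCIR 601/Quantel NTSC preset; the device aspect ratio value is an assumption for 4:3 NTSC, so adjust it to your pipeline.

    // Match the render output to the 720x486 D1 plate.
    setAttr defaultResolution.width 720;
    setAttr defaultResolution.height 486;
    setAttr defaultResolution.deviceAspectRatio 1.333;  // assumed 4:3 NTSC device aspect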
For the most successful motion track, track the length of the shot in segments of movement broken into directions. For example, a segment might start at a slight tilt of the head up and end when the head stops moving up or changes direction while moving up.
Setting Up for Animation

Now, for jaw movements and the ability to easily position and track the snout to our subject, we'll insert a short joint chain leading from the base, or the back of the snout, to the jaw hinge, where we'll fork into the lower and upper jaws. Figure 5.30 shows the result we're after. You can use Rox_Snout_Model.mb in the scenes folder of the Rox_lip_sync project on the CD-ROM as a starting point. This file includes a basic snout model. It also contains the original curves on which the model was lofted to give you a good idea of how it was created. You can use this model to build your skeleton and the inside of the mouth.

Depending on your subject and the angles of view, you might want to include a tongue inside the mouth. For this, insert an extra joint between the jaw hinge and the lower jaw joint to be the attach point for the tongue. Don't worry about attaching a tongue and its subsequent joint structure to the lower jaw by way of this interim joint until you've bound the snout to its bones and painted its weights appropriately. In this exercise, we won't need a tongue.

You'll also need to build the inside of the mouth, unless you're trying to lip-synch telepathy. To do this, instead of creating a complicated snout surface that wraps inside to make the gums and so forth, we'll create a simple row of lower jaw gums. Inset into those gums are the basic teeth. These will all be grouped under the lower jaw bone, as opposed to being skinned to it, as shown in Figure 5.31. The file Rox_Snout_Model_With_Bones.mb in the scenes folder of the Rox_lip_sync project on the CD-ROM will outfit you with the snout model, with the gums and teeth, and with the skeletal chain in position but unbound. Simply load this file to group your teeth, bind your skeletal system to the snout, and paint the weights, or continue with your own model.
Figure 5.30: A simple joint structure to animate the jaw movements
Figure 5.31: Group the gums and teeth under the joint system, but don't bind them.
The only thing that remains is to attach the gum geometry to the inside of the lips. By selecting isoparms on the outside edge of the gums and the inside edge of the lips, we can choose Edit Curves → Duplicate Surface Curves with history turned on to make lofts that extend between the two geometries and that will deform and fit as the mouth shapes change, as shown in Figure 5.32.

Figure 5.32: Loft surfaces between the gums and the model's lips, and keep the history.

Now it's time to bind the geometry to the bones. If you have already grouped the gums and teeth under the joints, go ahead and temporarily ungroup them from their joints. Once we bind the snout to the joint system, we'll regroup the gums and teeth back to their respective joints so that they move properly when the jaw is rotated. We're doing this because it makes no sense to skin them to the joints along with the rest of the snout.

Select the snout, select the root joint, and then choose Skin → Bind Skin → Smooth Bind. This attaches our geometry to the joint structure fairly well, but we need to paint some weights to make it perfect. In Shaded mode (press 5 in the persp window), select the snout geometry, and choose Skin → Edit Smooth Skin → Paint Skin Weights Tool (see Figure 5.33). Choose a round feathered brush, and set a good upper limit radius for your brush shape. In the Influence section of the Tool Settings window, select the very bottom joint of the lower jaw and make sure that the entire snout geometry is painted black; you want no influence coming from the bottom level joint. Do the same for the bottom joint of the upper jaw. You can RM click a joint in the modeling view to select it for painting, but you can't use the left mouse button.
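The smooth bind itself is a one-liner if you want to script it. The names snout and snout_root below are placeholders for your geometry and the root of the jaw chain.

    // Smooth bind the snout to the short jaw chain (Skin > Bind Skin > Smooth Bind).
    // -toSelectedBones restricts the bind to the joints named here.
    select -r snout snout_root;
    skinCluster -toSelectedBones -maximumInfluences 3;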
Figure 5.33: The Paint Weights tool

In this example, paint the weights so that only the lips and front snout move when the jaw joints are rotated. You don't want any unnecessary movement toward the back of the snout, as that will have to blend into the live action plate. On more thorough projects, when more than the snout is being replaced, however, it would be wise to paint some weight farther down the jaw to better mimic real movement in muscle and skin. Because this exercise only calls for the jaw joints, and subsequently the painted weights, to make general jaw movement, the weight painting should be relatively straightforward. Figure 5.34 shows the painted weights of the four major bones in the skeletal chain. Notice how only three joints truly control the model: the root joint, the upper jaw joint, and the lower jaw joint. When you've painted all the weights, grab a frosty beverage, put your feet up, rest your wrist, and check out what's cooking on TV. A relaxed animator is a happy animator.
Texturing Setup

At this point, before we set up our shading, we're going to duplicate (without history) the snout geometry once. Make sure your joints are at the bind pose; if not, select the root and choose Skin → Go to Bind Pose.
Figure 5.34: View of the four major bones and their painted weights.
If you are not at the bind pose before you duplicate your snout for the blend shapes, your blend shapes will not work later in the procedure. It is vital to be at the bind pose before you copy the snout. Otherwise, the deformation chains will be out of order, and your blend shapes will double transform when you animate the blends.
Select the snout geometry and duplicate it (again, without input connections or upstream connections turned on). This will be the base for your blend shapes. Keep it out of the way, hidden or templated, for now. Later we will duplicate that head 11 times and lay out all the heads in a nice grid; those will be our blend shapes. The file Rox_Final_Ready_for_Texture.mb in the scenes folder of the Rox_lip_sync project will catch you up to this point. It has the snout model, inner mouth, bound skeleton with painted weights, and one copy of the blend shapes ready for you. Just add texture, one cup of boiling water, and stir!

Creating a Camera Projection Shader

To create a texture for the snout, we'll use the background plates as camera projections based right off our renderable camera. Follow these steps:
1. In the Multilister, or the Hypershade if you're a weirdo, create a new Surface Shader node.
2. For its Out Color, click the Map button.
3. Turn on As Projection and select File.
4. On the projection node of this new shader, set the Proj Type to Perspective.
5. Under Camera Projection Attributes, select the perspShape node.
For our file node's Image Name attribute, we'll use the live action plate(s) that we're also using as an image plane. Click the Browse File button, dig out that file or sequence of files, and apply it. For this, we'll use Rox.tif from the CD-ROM and not worry about an image sequence.
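The same projection shader can be wired up in MEL. Everything below is a hedged sketch using default node names; in particular, the projType enum value for Perspective and the linkedCamera connection should be verified against your Maya version before relying on them.

    // Surface shader driven by a camera-projected file texture of the BG plate.
    string $shader = `shadingNode -asShader surfaceShader`;
    string $file   = `shadingNode -asTexture file`;
    string $proj   = `shadingNode -asTexture projection`;
    setAttr -type "string" ($file + ".fileTextureName") "sourceimages/Rox.tif";
    connectAttr ($file + ".outColor") ($proj + ".image");
    connectAttr ($proj + ".outColor") ($shader + ".outColor");
    setAttr ($proj + ".projType") 8;                     // assumed enum value for Perspective
    connectAttr "perspShape.message" ($proj + ".linkedCamera");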
Setting Up the Texture Reference Object

Select the snout geometry you've created (or the snout geometry named model in the file Rox_Final_Ready_for_Texture.mb in the scenes folder of the Rox_lip_sync project on the
CD) and assign your shader to it. If we were to leave it there, any deformations on this object would make the texture swim on the object, and the effect would not work. We need to make the projected texture "stick" to the object. For that, we'll create a texture reference object, which makes another copy of the snout that Maya uses to reference the texture information to map to the actual rendered snout. Select the snout, and in the Rendering menu, choose Texturing → Create Texture Reference Object to make a templated copy of the snout at the exact position of the snout.

Now, duplicate the joint structure (preferably without the tongue joints attached, if you decided to create them). We'll be attaching that copy to the templated texture reference object with a smooth bind. Now select the renderable snout, select the texture reference object, and in the Animation menu, choose Skin → Edit Smooth Skin → Copy Skin Weights to duplicate the same skin weights from one snout to the other. We want both snouts to move precisely together when we line them up with the real cat, and indeed when we track the built snout to a live moving subject. So, to summarize, we'll position and track both the renderable snout and the texture reference object at the same time by manipulating both skeletons. Once we position the snout and the texture reference object using both joints to match the picture of my cat in your persp window (see Figure 5.35 in the next section), we'll be ready to get on to finishing the lip-synch setup. If the model is not lining up precisely with the view, don't fret; we'll need to do the lining up on a component basis.

Finessing the Fit

Once you have the snout in position, you'll need to fit the snout precisely to the real cat. For that, you will need to select the CVs of both the renderable snout and the texture reference object in the areas that need to be nipped and tucked and make clusters of them; only then move them to fit the cat (in the Animation menu, choose Deform → Create Cluster). By using clusters, you're putting these deformations at the end of the deformation chain, allowing the other deformations (the IK and blend shapes) to happen first. Tweaking geometry like this without the benefit of these clusters will produce undesirable results, and it will make Maya angry. You don't want to see Maya angry. See Figure 5.35.

If the lip-synch were to a sequence as opposed to a still image (like the one for this example), you would need to 3D match move/camera track to the footage. In that case, you need to track both skeletal chains (and hence the models, since they are bound to the chains) to the background image.

Figure 5.35: Using clusters to fit the models to the shape of Rox's snout
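Each nip-and-tuck cluster from the Finessing the Fit step can be made like this; the CV ranges below are purely hypothetical and stand in for whichever CVs you marquee-select on both snouts.

    // Make one tweak cluster from matching CVs on the renderable snout and the
    // texture reference object (Deform > Create Cluster). As noted above, the
    // cluster ends up late in the deformation chain, after the skin and blends.
    select -r snout.cv[0:2][4:9] snout_reference.cv[0:2][4:9];  // hypothetical CV ranges
    cluster -name "snoutFit_cluster";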
You have to make sure that both the renderable skeleton chain and the reference object skeleton chain are tracking precisely together. If one is off, the texture in your final render will slip and swim. If you motion track using only one set of joints, make sure you copy the animation from the render chain to the reference object chain, or vice versa. Otherwise, just be sure to select both joints at once when making your rotations and movements in your tracking. Selecting both joints simultaneously should be fairly easy since the joints are right on top of each other; just use a marquee selection (drag the mouse over the joints) to grab both.

It seems like a good idea to use a constraint or an expression to make the joints have the same motion and orientation automatically, but you would not want this. Once you have the snout positioned and/or tracked to the background, you'll want to have some measure of independent movement in the render joints, most typically in the lower and upper jaw joints. Having them tied to the reference joints would disallow any independent movement for your animation, and you would be unable to move the upper or lower jaw joints of the render model to lip-synch.

We have two skeleton chains so that we can track both the reference object to the moving texture and the renderable snout to the BG plate of the cat's head. We don't simply assign both the renderable and the reference texture object to the same chain because we need the ability to manipulate the renderable snout's joints differently. We will need to animate some of the joints of the renderable snout independently of the reference object snout. For example, we'll need to move our renderable object's jaws to animate the lip-synch, while leaving our texture reference object's jaws unmoved. If both the renderable snout and its texture reference object were bound to the same skeletal chain, this would be impossible.
As long as the texture reference is tracked properly, the textures will not swim on the renderable object. As long as the track (except for the jaws' movement caused by talking) for the renderable object is also accurate, the third track will be spot on as well, creating a seamless scene in the final comp. See Figure 5.36. The track might involve more than simply rotating the joints to keep the snout in position. It might also involve animating the clusters you've created to tweak the geometry into place.
Creating the Blend Shapes

After you conform your model to the start position of the subject or track it to the length of the clip, you're ready to model some morph targets. Hold on to your socks, and call your mom and tell her you love her: moving CVs around on eight models can be utterly annoying.

Let's get back to the duplicated snout. For this scene, we'll need about 8 different mouth shapes, but let's make 12 blend shape targets right now. Duplicate the blank snout 11 times, and arrange the duplicates nicely in a grid, out of the way of the main Perspective view, as in Figure 5.37. It's fairly easy to add more blend shapes into the scene when you need them, but it's not a bad idea to insert a few "blanks" right now for future use, just in case.
If a new mouth position is needed, you can pull up one of the blanks, make the new shape, and, presto, it's already in the blend shapes.

Figure 5.36: Both models conformed to fit Rox with clusters

Figure 5.37: The grid of different mouths for the blend shapes

With your 12 copies, make 8 different mouth shapes for the major vowels and consonants. Make sure you hit two different O shapes and one each of A, E, U, M, T, and F/V. That should cover you fairly well, though you're more than welcome to make more mouth shapes according to the audio you have to synch to. When you're working with your blend shape objects, be sure not to freeze transforms on them. That will mess up the deformation order and give undesirable results.

You can adjust the blank blend shape objects fairly easily by moving the proper CVs on the models. You can also use deformers such as a lattice, but CVs are preferable because they are straightforward for this model and easy to manipulate. For example, to make an O shape, grab the first couple of rows of CVs in the middle of the top lip, and move them up. Similarly, grab the first few rows of CVs in the middle of the lower lip, and move them down. It's advisable to create more than one O shape to add variety. We've set up the blend shapes for the file Rox_Final_No_Anim.mb in the scenes folder of the Rox_lip_sync project on the CD-ROM with two blank O shapes for your modeling use.

A U shape is essentially the same as an O, but more sharply curved. For an A shape, grab the first few rows of the upper lip CVs, and move them up a little bit. Grab the middle ones, and move them slightly higher. Grab the first few rows of the bottom lip CVs, move them down a bit, and scale them out horizontally slightly. For an E shape, grab the first few rows of CVs on the bottom and top lips, scale them away from each other a little bit, and scale them out horizontally a little bit. Scaling the group of CVs up and out like this will separate the lips evenly and also elongate the lips slightly horizontally, as if you were stretching your lips to say "cheese." An M shape is essentially like an E shape, but scaled down and in as opposed to scaled up and out. This will close the mouth and slightly purse the lips.
An F or V shape starts with an E shape. Grab the CVs of the first rows of the bottom lip, and move them up toward the upper lip. A T shape is also like an E shape, but it only involves moving and scaling out the top lip CVs to bring the upper lip slightly higher. Grab the first few CV rows of the edges of the upper lip and bring them slightly higher than the middle, making a bit of a smile. See Figure 5.37 for an example of mouth shapes.

The file Rox_Final_No_Anim.mb in the scenes folder of the Rox_lip_sync project on the CD-ROM will provide you with a fully built and bound snout and skeletal chain that has already been conformed to fit the background plate. The blend shapes are provided and set up, though the task of actually building the mouth shapes is left up to you. It is important to make your own mouth shapes for animation, as it will give you more control over the lip-synch. Be sure not to delete or add any CVs or isoparms to any of the blend shapes. All the blend shapes need to be uniform with the original renderable object in that respect.
As a matter of habit, name all your blend shape snouts according to their letter sound. Select the blend shapes in the order that you would like them to appear in the Blend Shape Editor, select the renderable snout, and choose Deform → Create Blend Shape (open the option box). In the Create Blend Shape Options dialog box, make sure that In-Between and Delete Targets are turned off and that Check Topology is on, as in Figure 5.38. By leaving the In-Between check box cleared, we're setting up the Blend Shape Editor to display a separate slider for each mouth shape. In this way, we can combine different mouth shapes for even greater flexibility. And, of course, leaving the Delete Targets check box cleared will keep the original blend shape objects in the scene, in case you need to adjust the mouth shapes later, which you can do. Check Topology makes sure that the renderable snout object and the blend shape objects have the same number of isoparms and CVs.

Click the Advanced tab (see Figure 5.39), and switch the blend shape type from Default to Parallel. Switching the blend shape type is important. If you don't do so, the renderable head will blend right off the joint system, forcing one of us to loudly snarl at you, swing our arms around violently, and throw our shoes at you. Nobody wants that to happen. By selecting Parallel, you allow the deformations on this object (namely the joint system, the blend shapes, and any clusters) to behave nicely toward one another. Otherwise, it's a World Wrestling Federation free-for-all. Setting the deformations to Parallel sets up the deformation order so that the blend shape deformations occur in parallel, or in tandem, with the other deformations on the object, namely the skeletal deformations we've set up for the snout. Not setting the deformations to Parallel will create strange results when there is animation on the skeleton and the blend shapes.

Once your blend shapes are set up, you're in business. Test one of the sliders in the Blend Shape window (Window → Animation Editors → Blend Shape), and make sure the snout doesn't fly off the joints. If it does, delete the blend shape and try it again, checking your settings.
Figure 5.38: The Create Blend Shape Options dialog box
Figure 5.39: On the Advanced Tab, be sure to select Parallel Deformation Order.
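Scripted, the Create Blend Shape step above looks roughly like the sketch below. The target names (O1 through FV) and the base name snout are assumptions matching the naming habit suggested earlier, and -parallel reproduces the Parallel deformation order chosen on the Advanced tab.

    // Create the snout blend shape: targets first, base last, parallel blending,
    // with topology checking on (mirrors the option box settings in Figure 5.38).
    blendShape -name "snoutShapes" -parallel -topologyCheck on
               O1 O2 A E U M T FV snout;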
At this point, clean up your scene file. Make sure everything is named properly. Make a few display layers: assign the reference object snout and its joints to one layer, assign all the mouth and tongue objects to another layer, assign the blend shapes to a layer that's hidden, assign the renderable snout and its joints to another layer, and so on. Be organized. The cleaner and easier the file is to work with, the better the result and the less likely you'll need a stiff drink after the project.

Animating the Lip-Synch

This is it now; this is where we separate the men from the boys, the women from the girls. Lip-synch animation can be tricky. You need to keep a few elements in mind when lip-synching a character, the most important of which is that the animation doesn't start and stop with the lips moving in time with the words. First, it is important to gauge properly the intention of the talking animal. This may sound weird, but it's true. "What's my motivation?" whines an actor on set to his director. It's a cliche because it's true. Knowing what your lip-synch subject is saying is only part of the battle. You also need to understand why and how it's talking and what it's saying. These details may seem insignificant, but small details make up 90 percent of believability in an animation. The best way to track down motivation is to listen repeatedly to the track that you're lip-synching. If possible, see the shot in the context of the entire piece to better understand what is happening before and after your shot. Listen to the tonal variations in the audio. Listen to the voice's cadence. Isolate lilts and other diminutive nuances that will help better flavor your lip-synch.

Keep in mind that the purpose of this example is strictly lip-synch. Myriad facial movements accompany most well-done lip-synch animations. You can't move your mouth and make verbal gestures without moving the rest of your face, especially when emoting. Most commonly, 2D warping software is used to augment the type of lip-synch we're implementing here. However, because of space constraints, we'll leave out that facet of this animation. OK, let's get to it!
Figure 5.40: The audio waveform is displayed in the Time Slider.

"Do You Understand the Words That Are Coming Out of My Mouth?"

Load Rox_Audio.wav from the CD-ROM into your media player and listen to it. You will be lip-synching about 30 seconds worth of audio. That's a lot of audio for one shot, but this example exposes you to lip-synch using only one setup. When you're comfortable with what the voice is saying and you have a good feel for how she is saying it, load the file into your Maya setup. Now, follow these steps:

1. Set the frame rate to 30fps.
2. Choose File → Import.
3. Locate your audio file. It's better to copy the file onto your hard drive, into your sounds directory in the current project, if it's on a CD or removable drive. This step imports the file into the scene.
4. RM click the Time Slider, and choose Sound → audio filename from the shortcut menu to display a cyan-colored waveform superimposed on your Time Slider, as shown in Figure 5.40. If the waveform doesn't display, choose Window → Settings/Preferences → Preferences. In the Sound section, set Waveform Display to All, as shown in Figure 5.41.
5. RM click the Time Slider again, and choose Sound → Rox_Audio to display the Attribute Editor for the sound.
6. Set the offset to 1 instead of 0 to start the audio on frame 1 rather than frame 0. (Honestly, starting at frame 0 just gives me the willies.)
7. RM click the Time Slider (boy, we're sure doing enough of that lately!), and choose Set Range To Sound Length to set the Time Slider range to the exact length of your audio, which should now read frame 1 to 822.
8. Use the range slider to zoom in to a more manageable time range, such as 30-50 frames at a time that correspond to what the voice is saying.
9. RM click the Time Slider again, and choose Playblast (or use your hotkey or the menu selection). Be sure that the playblast will play in Movieplayer and not in fcheck. Windows Media Player or the QuickTime player will play the audio along with the visual playback, while fcheck will not. To change from fcheck to Movieplayer, RM click the Timeline and choose Playblast Options. Click the Movieplayer radio button.

Now notice the audio in the context of your frame range. Once you playblast the scene or even click the Play button, the audio loads into memory, which makes scrubbing the audio back and forth rather speedy. Again, the file Rox_Final_No_Anim.mb in the scenes folder of the Rox_lip_sync project on the CD-ROM will bring you up to speed. All this file needs are the mouth shapes modeled into the blend shape objects already laid out for you and the final animation.

Overall Mouth Movements

Grab the slider in the Timeline and scrub the audio to get a sense of the gross or overall movement of Rox's jaw. Grab the lower jawbone and begin rotating it up and down to match the audio. Don't rotate too far down; a little goes a long way. This first pass is to time the gross mouth movement properly. For a slightly more animated or cartoon feel, place some rotations in the upper jawbone as well. Again, a little goes a long way, especially with the upper snout movement. These rotations accentuate the lip-synch overall.
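If you find yourself repeating the audio-import steps above across several versions of the scene, the same setup can be done from the Script Editor. Here is a rough MEL equivalent of steps 1 through 7; the file path and node name are assumptions, so adjust them to your own project:

   currentUnit -time "ntsc";                                  // 30fps
   string $audio = `sound -file "sounds/Rox_Audio.wav"
                          -name "Rox_Audio" -offset 1`;       // import and start at frame 1
   playbackOptions -min 1 -max 822;                           // match the audio length
   global string $gPlayBackSlider;                            // Maya's built-in Time Slider control
   timeControl -e -sound $audio -displaySound true $gPlayBackSlider;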
Figure 5.41: Adjust the preferences to display the waveform.
Figure 5.42: The Blend Shape dialog box with Rox's mouth shapes
When you get to the end of the current time segment, move to the next segment, and continue until you're done with the overall jaw movements. Playblast and make sure everything looks okay. Now you're ready for the lip deformations through the blend shapes.

Mouth Deformations

Once you have the jaw movements timed correctly, the next step is to use the lip deformations to make the different mouth shapes. Choose Window → Animation Editors → Blend Shape to open the Blend Shape dialog box, as in Figure 5.42. Now comes the fun part. Having spent oodles of time setting up and crafting our blend shapes, we're ready to spend oodles of time pushing some sliders and keyframes around. Basically, and this is about all there is to say about this stage, use the Blend Shape dialog box to match the mouth shape on the model to the sounds in the audio. The more you do it, the better and faster you get, but here are some guidelines (a small keyframing sketch follows the list):

• Use a combination of mouth blend shapes to achieve a particular sound, phoneme, or vowel.
• Mouth out the phonemes yourself slowly and pay attention to how your mouth forms before, during, and after the word or sound. Don't worry about looking crazy at your desk as you talk to your keyboard. We all do it.
• Mouth shapes will differ between two instances of the same exact word depending on the context in which they are spoken and which words follow or precede them.
• Try not to go from one lip shape to another in less than two frames or more than four frames. There are always exceptions to this rule, but be careful. You don't want the lips chattering too fast or moving in slow motion.
• If a sound is being held for a while in the audio file, make the mouth shape and hold that shape, but add some slight movement in the lips and/or jaw. Never allow yourself to keyframe between two jaw movements/blend shapes over more than four frames, though. That would make it look as if the lip-synch is animated in slow motion.
• Run a first pass for each 30-50 frame section, playblast it with the audio on, and then go back and adjust the joint rotations and blend shape keyframes to finesse your work.
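Each slider in the Blend Shape window is just a weight attribute on the blend shape node, so the same keys can be set from MEL while you scrub. The following is only a sketch of the two-to-four-frame guideline above; the node name snoutShapes and the target name E_shape are placeholders for your own names:

   // key the E shape off, on, and off again over a few frames
   setKeyframe -t 118 -v 0.0 "snoutShapes.E_shape";
   setKeyframe -t 120 -v 0.8 "snoutShapes.E_shape";   // hit the sound
   setKeyframe -t 123 -v 0.0 "snoutShapes.E_shape";   // relax three frames later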
Rendering the Snout

Yikes! That was a lot of convolution. Photo-real mouth replacement is a multistep and fairly complicated process. You might think we're almost done, but there's more! The next task is to render out the snout. Turn off the image plane and render out the sequence you've finished. You'll then need to import your frames into your favorite compositing program, such as After Effects, combustion, Chalice, Shake, and so on. Since this is a book about Maya and not compositing, we'll only get into the concepts of the final comp.

You'll need to composite the rendered sequence with its alpha channel over the background plates of the subject. Since you've already tracked the rendered sequence, it should fit over the plates perfectly at all frames. With Rox, we'll fit our rendered frames over a still image. Notice, however, that when the lower lip or jaw moves upward a bit, the cat's closed jaw from the plate beneath it is revealed. This situation forces us to paint parts of the cat's mouth out of the plate so that the moving jaw will reveal open space or the cat's neck. With a single frame, it's easy enough to paint these things out in Adobe Photoshop or another image editor. With a sequence of images for your plate, however, you'll spend a good amount of time painting and tracking bits of the real subject's jaw and mouth out.

When you have painted out the parts of the live-action snout that will be revealed by the moving CG snout, comp the CG back on top. You may need to run some sharpening filters or color correction to match the plates. If you notice your snout rendering out too soft compared with the background plate, try turning off the multipixel filter in the Render Globals and the filter type for the file map of the cat in the model's shader (see Figure 5.43).
Figure 5.43: Turning off the filter type will help sharpen the texture file when it's rendered.
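If you'd rather flip those two switches from the Script Editor, something like the following should do it. This is a hedged sketch: catSnoutFile stands in for whatever your file texture node is actually called, and the render-quality attribute name assumes the Maya software renderer, so double-check both names in the Attribute Editor before relying on them.

   setAttr "catSnoutFile.filterType" 0;                    // file texture filtering off
   setAttr "defaultRenderQuality.useMultiPixelFilter" 0;   // multi-pixel filtering off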
Final Touches

You may have noticed that Rox doesn't have any whiskers. Well, we removed them. Before you call the ASPCA on us, we painted them out of the background plate in Photoshop. It would have looked mighty weird doing a mouth replacement without having painted her whiskers out. When the cat begins to talk and the snout deforms, the whiskers won't move, and that would look wrong. Now, however, we need to replace the whiskers with CG whiskers. You can do so with Paint Effects or with geometry that is attached to the snout model, among other ways. We'll leave that up to you.

What ultimately separates a good job from a bad job is the level of detail. By combining multiple layers of simplicity, you can create a whole of complexity that is elegant and professional looking. This example has taken us through essentially only one-third to one-half of the battle in mouth replacement, the setup procedure.
Figure 5.44: Rox!
The rest of the battle lies in the animation that flows from your heart to your mouse to the screen. Sounds all New Age, but it's true. Only about the third or fourth time doing this example will you start becoming adept, so don't give up. Keep at it as long as it's fun or it pays. Figure 5.44 shows a still from the final animation (Rox_Talks.mov), which is available on the CD-ROM.
Summary

Although lip-synching isn't the "sexiest" animation job, it is extremely visible and difficult to get just right. Fortunately, Maya provides a tool set that makes this job much quicker and more accurate, especially for longer or series projects in which large amounts of dialog are spoken. Lip-synching is time intensive and exacting, but it does offer a number of creative choices and is rewarding when done correctly. Whether done as part of a stylized animation, as in our first hands-on example, or to make a "real" animal speak, as in the second, lip-synching is an effect best judged by the way it disappears from our consciousness after a few moments. Like so much in life, if lip-synching is done right, it looks easy and natural, belying the vast work that goes into creating it.
Creating Crowd Scenes from a Small Number of Base Models Emanuele D'Arrigo
In the early days of 3D computer graphics, nothing was scarier than dealing with lots of objects and polygons. The limited and expensive hardware of those years, together with the usually highly experimental, custom software, introduced pioneer artists to the concept of "rendering time." A long pause in the interaction with the console, sometimes minutes, more often hours or nights, was necessary to process the 3D data generated by the artist to produce a 2D array of pixels, the final picture. These pioneers watched with disbelief and amazement as simple objects happily flew around on the monitor after a night of rendering. And rendering time was strictly related to the number of objects and polygons in the scene. Well, I was barely a baby at the time, but what I just described really hasn't changed all that much. Minutes, hours, and nights of rendering time must pass before we can see our animations. And this time is still primarily related to the number of objects and polygons in the scene. What did change, though, is the output. And what a change! Fueled by individuals and institutions interested in exploiting the potential of 3D computer graphics, those early experiments became a billion-dollar industry with almost no limits to what talented, creative people can visualize. If displaying a simple sphere on the screen was once a miracle, nowadays
directors, producers, and artists themselves, while constantly fighting budget and time constraints, at least have complete freedom of visual choice. Natural forces such as water in The Perfect Storm and asteroids in Armageddon and Deep Impact unleashed hell on the screen. Dinosaurs with jiggly muscles and wrinkled, colorful skin have been brought to life on live-action background plates in a seamless integration, as in Jurassic Park and Disney's Dinosaur. With realistic 3D human characters deformed in The Mask, violently crashing cars in Meet Joe Black, and acting in a fully 3D environment in Final Fantasy, the ultimate challenge has been met and defeated. Or has it? Of course not. Many brand-new challenges stemmed at one point or another from the brilliant minds of many individuals and teams involved in the industry, always trying to recreate again and again that sense of amazement from the early days of computer graphics in themselves and in the audience. In some cases, it's "just" a matter of new algorithms being discovered and implemented in the software: inverse kinematics, displacement, radiosity, and other new programs are thrown into the battlefield, allowing unprecedented feats in modeling detail, animation style, and rendering realism. In other cases, however, it's a matter of mass: the collective size and complexity of the sheer number of objects in a scene simply overtaxes computers.
Not Only Big Studios

Be it jetfighters versus space fighters as in Independence Day or crawling masses as in AntZ, crowd scenes are now in the range of any production. Not only because the tools offered by currently available software and the relatively cheap hardware allow animators to deal with them, but especially because it doesn't take an army of thousands as in The Lord of the Rings. Consider a "simple" TV commercial production or a music video. The budget is small compared with any feature film such as those mentioned earlier. Nevertheless, the storyboard calls for a shot—maybe more than one, in the characteristic uncertainty of TV productions—with 20 or 30 characters doing something such as running on a street. At first glance, 20 or 30 characters may not seem that many. If they're characters in the most abstract sense, including inanimate items such as cars, it's reasonable to think that a modeler and an animator could handle the shot by simply keyframing each character. Of course, life is rarely so easy: our example shot is not about cars but about two-legged, two-armed, (luckily) one-headed, colorful aliens running a marathon on a Golden Gate-like bridge. In this situation, it's still not easy to completely exclude a manual approach, in which modeler and animator painfully create and give life to all characters. Twenty or 30 characters is a threshold number, right on the fine line where more complex but more powerful and flexible procedural methods become time and cost effective. How, then, do you decide which method to use in each case? Well, something I learned relatively early in this business is that making a shot is not just a matter of doing it. It's a matter of doing it in a way that it can be changed easily. Directors and supervisors all have ideas about what they want to see from you. Some know exactly what they want—but unfortunately you didn't take Telepathy 101 in high school—while others don't have a precise idea of what they want and will need to see something from you to decide that it's not what they wanted after all. In both cases, a trial-and-error process has begun, and a tradeoff between setup time and the consequent increased flexibility needs to be found. Human resources and money are the biggest elements to consider when dealing with crowds of a few elements, but soon, when the number of characters or their complexity
(or both) increases, the manual approach simply fades out of range, leaving a procedural approach as the only way to go. As Figure 6.1 shows, four levels can intuitively characterize the complexity scale. At Level I are characters with no moving parts, for example, asteroids or spaceships. At Level II are characters that have few moving parts, for example, cars. At Level III are characters that have an articulated spine, such as fish. And at Level IV are fully articulated characters that have an animated spine and appendages, such as humanlike figures.

Figure 6.1: The threshold defining when procedural methods should take over from manual methods is actually a fuzzy line that depends on the number of characters and their complexity.

The good news, though, is that once you take the procedural path, and develop the solutions to a plethora of sub-problems or find the solutions in existing software, the number of characters you need to handle doesn't really matter anymore. Our hypothetical director might come up to you one morning and say: "Wow, I saw your shot with the 30 running aliens. Really cool, but... could you make... say... 100? You know, it would really help the story!" You notice the grin on his face, and it's clear he enjoys the moment: he's just waiting for you to collapse, your sunny morning ruined by the dark night he thinks you're going to spend in the office. But then, with your most genuine and happy smile, you reply: "Sure, no problem. You'll see the new version tomorrow, during dailies." At this point, his jaw should drop somewhere on the carpet, and you can go back to your cappuccino and croissant breakfast.
Heroes

A crowd is not just a group of characters. A crowd is a character in and of itself. A crowd acts by itself as a single entity, sometimes as the main character of the shot and sometimes as a background element. For example, let's consider a shot that portrays a stadium filled with people or furry pink elephants with bright yellow dots—never limit your creativity. Let's imagine that two characters are talking to each other in the foreground, well in the frame. Unless there are special needs, they will probably get all the attention, while the audience swiftly discards the crowd as a background element, even if animated. Then, let's imagine a second shot, with no elements other than a wide view of the crowd in the stadium. In this second case, the audience's attention is focused on the crowd as a whole. The process of directing the audience's attention doesn't just happen, though. You must carefully and intentionally mold the process for effective storytelling. Or at the very least, you must be aware of where the attention of the audience is and is not.
For this purpose, two dramatically important perception laws help us understand where our eyes—and minds—tend to go when we look at something: • In a still scene, the human eye focuses on something that moves. • In a moving scene, the human eye focuses on something that is still. These laws, evolutionarily hardwired in our brains (probably from ancient times, when something moving was either something edible or something that considered you edible) give a good indication of what our eye is going to do. For example, if a shot shows a group of running characters, and one of those characters stops, our attention is immediately drawn to that character. On the other hand, if a shot shows a close-up of 50 pink elephants in a football stadium, and one suddenly stands up, screams, and gestures to the referee, you can bet that our attention is immediately drawn to that particular pink elephant. Although these laws apply to the staging of any scene, including live-action films and theatrical presentations, they are effectively inversely important for a crowd: to focus audience attention on a single detail or individual is exactly what you don't want when animating and creating a group of characters. The focus is not supposed to be on a detail, on one individual doing something different from the others. If the story actually requires such a distinction, that character or those characters are not "technically" part of the crowd; instead, they are "heroes," and you must handle them separately, most likely manually, with the standard tools offered by the software. This does not mean, however, that a crowd doesn't need details. Au contraire! Modeling, animation, and rendering can and should add details to the characters in the crowd. Our pink elephants, for instance, could all wear the tiny bright yellow hat of the team they're supporting at the stadium. Or they could all flap their ears continuously, to overcome the intense heat of the hyper technological stadium unfortunately constructed in the middle of the jungle. But none of them should visually stand out, say, with a bright green hat or nonflapping ears, unless there's a reason for them to be the focus. You must visually balance details in an overall uniform fashion, characterizing the crowd, not the individuals that make it up.
Deploying Three Task Forces

To deal with a crowd scene means to handle it and treat it on two levels: at the crowd level and at the individual level. Although the crowd is a single entity, it is made out of many characters that supposedly have a life of their own, and somehow you have to show this. For example, a flock of birds normally flies in one direction, but the similar and yet never identical movements of each bird differentiate it from the others, giving a nice, characteristically asynchronous look to the flock. This concept is not limited to animation, but is easily extended to modeling and rendering. For example, each bird in the flock is different in size and proportion, not to mention the color patterns of the feathers that differentiate males from females from juveniles. Therefore, at the crowd level, you must decide how the crowd moves in its entirety; at the individual level, you must decide how each individual looks and moves with the crowd. Although you can treat the crowd level primarily as a single task, being "just" about position and orientation of each individual frame by frame (with some exceptions), at the individual level, you deal with the appearance of individuals and how they propel themselves (legs, tentacles, turbines).
We can then broadly identify three tasks:

1. Creating a flexible crowd system to handle the crowd as a whole, generating position, orientation, and status tags for each frame for each individual
2. Creating and/or differentiating the characters as needed
3. Animating the characters procedurally in any of their properties according to the information provided by the crowd system

Once a production group identifies these tasks, it can divide the crew into teams, each assigned to a single specific task. The first team has to think about a set of methods to animate the crowd as a single acting character (task 1). Although they definitely need to know what the crowd is composed of (a platoon of futuristic hovercraft tanks riding some desert sand dunes moves slightly differently from a school of tuna fish), they don't really need to worry about the appearance of each single character, other than maybe considering its volume to avoid unrealistic interpenetration. At this level of abstraction, they'll pre-visualize the shot with simple spheres or boxes of various sizes and proportions.

The second team, in the meantime, can start working out the details of the source models—one or many—and their rendering properties (task 2). Once the original model is ready, this team must develop methods to duplicate and differentiate the models. In some cases, this step is simply a matter of minor color differences. In others, the geometry is affected, and major color differences are appropriate. The goal for these variations is to provide enough visual variety to the usually static properties of the characters that they seem to be individuals, not duplicates.

Team 3 is responsible for animating the individual characters in the crowd (task 3). Although not always necessary—for instance, if the crowd is actually a procedurally driven Formula One race—the animation team is a must if the characters are to have limbs or any other nontrivial moving part. The responsibility of team 3 is to move the body according to the information provided by the crowd system, most notably the speed and rate of turn, easily extrapolated from position and orientation values changing over time. Team 3 must also handle status tags, which I'll discuss later, because of their potential in changing the position of the characters' moving parts. How this is done depends on the typology of the crowd, ranging from a fully procedural to a semi-automatic approach, in which libraries of hand-animated character motion are seamlessly blended by procedural means. I'll cover these topics later in this chapter.

At this point, you may have noticed that something is missing. Who takes care of the skeletons, constraints, and expressions that compose each character setup? Although the best solution is for a fourth team to take care of that specific task, the schema outlined in this section is just an ideal path to diverge from as soon as the details of a production require it. The first team could handle this task, for instance, because of some strict connection between the crowd system and what each individual does, but most likely the choice will be between the animation team and the modeling team, with a preference for the former. The animation task force is likely to have more experience animating characters, and they're the folks that are supposed to use the character setup. Given this broad overview, it is important to realize that even if we can draw lines between these primary tasks, they are still intertwined.
They require a good level of communication between the teams, a need that grows exponentially with the complexity of the individual character and the richness of the crowd behavior. Let's now proceed with the details of each team's to-do list and how to tackle them.
The Genesis Team

The Genesis team (team 1) is responsible for the modeling, setup, and diversification of the characters. Modeling a character for a crowd is somewhat similar to modeling a character for a video game: the models must have a low polygon count, and the textures provide most of the details. Although in the movie industry we have some advantages over our cousins in the video-game industry, who are forced to render a relatively high-resolution frame in 1/30th of a second, it is good practice to keep the overall complexity of a crowd character as low as possible. For the example in this chapter, I set myself a target mesh of about 2000 polygons, knowing that the overall number of individuals I wanted in the final scene is in the 100-200 range, for a worst-case scenario of about 400,000 polygons. These numbers aren't really something I found with a magic formula precisely stating how many characters and polygons my dual 800MHz workstation can handle. But anybody who has worked intensively enough in this business will know about how much can be asked of hardware and human patience.

Human patience tends to be the most important issue when working with a crowd scene. During the production of Help! I'm a Fish, a Danish-German animated feature I had the pleasure to work on, some scenes had a loading time of 20 minutes, a change to the crowd—whose properties were mostly expression-based—took another 20 minutes, and saving the scene took an additional 20 minutes—on a then-powerful 250MHz SGI Indigo2, with 300-600 models of 200-400 polygons each. In an ideal 10-hour day (no lunch, no coffee, no nervous breakdown), that's a total of 10 changes, not really many if you're not at the final, polishing-touches stage and you know exactly where you want to get. The biggest problem facing an artist working on a crowd scene is not so much rendering time, which is becoming cheaper and cheaper every day with larger render farms, but the degree of interactivity with the 3D scene, a bottleneck created by the growing but far more limited power of graphics cards. Later in this chapter, I'll discuss modular strategies for overcoming these problems.

For now, let's focus on the Zoid, the character we're going to use as the base model for our crowd, shown in Figure 6.2. (You can find the Maya file for this scene, Zoid_base.mb, on the CD-ROM.) The Zoid will be the protagonist of a shot, together with about 200 of his brothers, portraying a crowd sitting on a bleacher in a basketball court and making the wave. As you can see in Figure 6.2, the polygonal mesh is simple: 2000 polygons, 4000 vertices, and 53 bones. As a rule, I tend to avoid NURBS or subdivision surfaces for crowds, to save the renderer from the tessellation step, given that most of the time polygons are enough for models of this type. A three-finger hand is in place for our Zoid, with bones for each finger, but I could have easily saved another eight bones by using two fingers only, one for the thumb and one for the four others. Time is a tyrant, and any step that saves time is worth considering.

Some expressions connect two extra attributes on the forearm (elbow joint) to the closure of the hand and thumb into a fist; their purpose is to help the animator pose the character for the animation library. Those expressions, though, as any other static-output expression, shouldn't end up in the final scene. Each expression is evaluated at every frame, no matter whether the result will actually change or move anything.
Leaving expressions in the scene on a handful of characters might easily be forgiven. In a crowd of 200 characters, in a 96-frame scene, 12 expressions in the original Zoid (one for each finger joint) mean 200 x 96 x 12 = 230,400 evaluations, a rather large waste of processing power if it can be avoided with no loss in the visual output.
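To make the kind of helper expression described above concrete, here is a minimal sketch. The joint and attribute names are purely illustrative rather than the ones used in Zoid_base.mb, and, as just explained, anything like this should be baked or deleted before the character is multiplied into a crowd:

   // drive the finger joints from custom fistClose/thumbClose attributes on the forearm
   L_index_1.rotateZ = L_forearm.fistClose * 60;
   L_index_2.rotateZ = L_forearm.fistClose * 75;
   L_thumb_1.rotateZ = L_forearm.thumbClose * 45;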
Figure 6.2: The skin, skeleton, and shader tree of a Zoid.
As you can see in this simple example, in the creation phase we must be really careful with the apparently small number of features of the base character. We must try to limit the model and setup to only what's necessary and unavoidable, and we must flag some characteristics as removable by the people downstream in the production pipeline. Additional utilities for the animators are six IK handles—two in the ankles, two at the tips of the feet, and two more in the wrists. Two nulls, or locators (dark green), control the pole vector of each arm's two-bone chain.
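For reference, an arm chain of this kind is usually wired up with a rotate-plane IK handle plus a pole vector constraint pointing at the locator. A minimal sketch follows; the joint and locator names are made up for illustration and don't match the ones in the Zoid file:

   // a two-bone IK chain from shoulder to wrist, steered by an elbow locator
   string $ik[] = `ikHandle -sj "L_shoulder" -ee "L_wrist" -sol "ikRPsolver"`;
   poleVectorConstraint "L_elbow_loc" $ik[0];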
Now, let's look at the shader tree. It too is simple: eight nodes in all. Starting from the top node, a Lambert node, the tree splits into surface colors and a fake rim light. The rim light is given by a Sampler Info node whose facing ratio output is piped into a black-to-red ramp. Although this might sound complicated, it is actually nothing more than a bare X-ray shader, the only difference being that its output is directed to the incandescence input of the Lambert node instead of to the transparency. In the ramp itself, moving colorEntry[2] up and down changes the overall thickness of the red rim, faking a back light behind the character. The two surface colors—yellow and blue—are provided by two monochromatic ramps mixed together by a layered texture node. A third input to this node is a texture file, painted on the 3D model with Maya's integrated painting tools and then saved as a 1024 x 1024 .tga picture (base_humanoidShape.tga on the CD-ROM). The yellow color is placed on top of the blue background color using this file as a mask, generating the effect of a yellow pattern on the blue skin of the character. Figure 6.3 shows the three colors of a Zoid.

Figure 6.3: The three colors characterizing a Zoid

The character is encoded as a Maya binary (*.mb) file. I have a rule of thumb about this, not unusual in the 3D industry, and that's why I'm mentioning this apparently trivial topic. Maya binary files load faster than Maya ASCII files. The Zoid_base.mb file loads two to three times faster than the same file saved as Zoid_base.ma. Saving your finished model files in Maya binary is a good idea because these files usually contain thousands of coordinates for vertex positions and numbers for topology information. You can then switch to Maya ASCII when you put together your entire scene with all models, especially when the scenes become complex and have references in them. As you know, you can read and edit an ASCII file using a simple text editor. This can help in at least two situations. For example, a scene has 50 references to model files. If you have an ASCII file, you can switch from low-resolution to higher-resolution models without opening the scene. Simply use the Replace function available in any decent text editor, and you can start right away with the high-resolution render. (See Figure 6.4.) Here's another example: Something goes horribly wrong, and your extremely precious scene is corrupted. You can open an ASCII file and debug it manually to find and correct the line that generated the loading error. In the worst case, you will lose only the corrupted lines instead of the last two hours of work.
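Incidentally, the fake rim described above boils down to just two connections, so it is quick to rebuild from the Script Editor if you ever need to. This is only a sketch: the Lambert node name is an assumption, and the ramp still needs its black-to-red entries set by hand or with setAttr.

   string $info = `shadingNode -asUtility samplerInfo`;
   string $rim  = `shadingNode -asTexture ramp`;
   // facing ratio drives the ramp; the ramp drives the Lambert's incandescence
   connectAttr -f ($info + ".facingRatio") ($rim + ".vCoord");
   connectAttr -f ($rim + ".outColor") "ZoidLambert.incandescence";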
Figure 6.4: The first lines in a *.ma file include the path and filenames of the referenced models. Replacing them with the help of a text editor updates the scene without the need to open it in Maya, a time-consuming operation when the crowd of characters is in place.

A Word about Architecture

Before we look at the scripts in this chapter, I want to give you some details about their overall architecture and discuss the guidelines I follow when writing a script. First, I require that each script act either on a selected group of characters or, if nothing is selected, on all characters. Such flexibility is important. For example, the crowd is in place, but you don't like a few of the individuals. You can run the script more than once, and only the selected characters will be affected. Sometimes, instead, the overall effect is not right, and in those cases you'll need to change everything in the scene by running the script on all the characters.

Second, I must be able to run every script in a GUI mode and from the command line. In the Zoid_hueVar.mel script, for instance, I can call the global procedure Zoid_hueVarGUI() and access it through a button on one of the shelves. Clicking the button opens a simple promptDialog window that displays a tiny prompt line with reliable default values and some hint of their meaning. The user can enter help instead of the values to get a more extensive description.
This script admittedly doesn't have a fantastic GUI. I prefer to keep interfaces as small as possible and focus on writing a robust core procedure, the one actually doing something to the scene, in this case Zoid_hueVar(). Once the script is working correctly and is properly debugged, there is no harm in going back to the GUI code and improving it or having a GUI programmer take care of it.

The command line becomes important when things get slow. After the testing is done and all the useful input values and ranges are known, you can use the interactive GUI version of the script to modify the crowd and all the variations of its individuals. But this can take a while if, for example, you have 1000 Zoids in a scene. Even if the system can handle 1000 Zoids, it would surely take its time doing so. Suppose that you calculate the runtime at about 6 hours and that it should finish about 2 A.M. And you need to run another script after the first one, which will also take some time. You don't want to be there at 2 A.M. to run the second script, do you? If the scripts you write are GUI based only, you have no choice but to wait, miss your son's evening basketball game, go home about 3 A.M., and face a spouse who's not exactly happy. Solution: the script must always have a GUI-free "core" procedure, ready to run from the Maya command line, the Maya Script Editor, or from another script. With this method, you can stack two or more scripts to run one after the other, even more than once, without your assistance.

The last issue I want to mention about the architecture of these scripts is an important argument regarding anything meant to run for a long time, a usual occurrence with crowds. At least in the interactive, GUI version, a script must generate a goodly amount of runtime output (maybe with different "verbose" levels) and statistics about what it is doing and how fast it is doing it. With a crowd scene, a script can easily run for ten minutes, one hour, or all night. Therefore, I usually generate two types of output:

• A line of text for each character processed
• A report of the start and end times

In the main loop of the script, I usually generate a line of text for each character processed, for example, "Processing Zoid27... done! - (27 of 50)". The frequency of this output can range from less than a second to a few minutes. If the frequency at which a new line of output is printed to the screen is lower than this, generate additional lines in key points of the code, most notably after the instructions that require the longest time—for example, loading and unloading of big reference files. If the frequency is higher, and the output is literally racing down the screen, it's wise to generate output only every 10, 100, or 1000 loops or every 5, 10, or 25 percent of the total number of loops. Furthermore, at the beginning of the script, I usually set a variable with the start time. When the script finishes, the start time and the end time are reported together so that I can check when and for how long the script has run.

Here's what can happen if you don't generate such output. You run the script overnight, and during dailies, the director or the supervisor asks you to make a change. "No problem," you reply, but when they ask how much time it will take and if the executive producer can see it before lunch, you have to answer evasively because the script that started at 8 P.M. could have finished at 9 P.M. or at 7 A.M. the next morning. Generating output prevents this situation.
Listing 6.1 is an empty template, the basic framework on which I usually structure my scripts. Easily recognizable are the two main procedures, the user-input handling, and the output-generating lines of code.

Listing 6.1: An Empty Template

// templateScript.mel - v1.00.00
// by Manu3d - © 2002

global proc int coreProcedure(float $arg1,
                              float $arg2,
                              float $argN,
                              int   $verbose)
{
    // handling the selected objects
    string $obj;
    string $objList[] = `ls -sl -o -sn`;
    int    $objNb     = size($objList);
    string $time1;

    if($verbose) {
        print("----------------------------------------\n");
        print("coreProcedure output starts here.\n");
        print("----------------------------------------\n");
        $time1 = system("time /T");
    }

    // cycling through the selected objects
    for($i = 0; $i < $objNb; $i++)
    {
        $obj = $objList[$i];
        if($verbose) print("Object " + $obj + "...");

        // ---- ALL the action STARTS here ----

        // ---- ALL the action ENDS here ----

        // Always output the script's progress.
        if($verbose) print(" Done! - (" + ($i + 1) + " of " + $objNb + ")\n");
    }

    // reporting final stats
    if($verbose) {
        string $time2 = system("time /T");
        print("----------------------------------------\n");
        print("Main loop begun: " + $time1);
        print("Main loop ended: " + $time2);
        print("----------------------------------------\n");
        print("coreProcedure output ends here.\n");
        print("----------------------------------------\n");
    }
    return 0;
}
global proc int coreProcedure_GUI()
{
    // creating the user interface with default inputs
    string $result;
    $result = `promptDialog -t "coreProcedure v1.00.00"
                            -m "arg1 arg2 argN (or \"help\")"
                            -tx "1.0 10.0 100.0"
                            -b "OK" -b "Cancel" -db "OK" -cb "Cancel"`;

    // handling the user inputs
    string $inputLine;
    if($result == "OK")
    {
        $inputLine = `promptDialog -q -tx`;
    }
    else
    {
        warning("The user cancelled the action.\n");
        return 1;
    }

    string $buffer[];
    int $bufSize = `tokenize $inputLine " " $buffer`;
    if($bufSize < 1) error("At least \"help\" expected.\n");

    // handling the help request
    if($buffer[0] == "help")
    {
        print("Help - blablabla\n");
        print("Help - blablabla\n");
        return 0;
    }

    // finally retrieving the actual inputs
    float $arg1 = $buffer[0];
    float $arg2 = $buffer[1];
    float $argN = $buffer[2];

    coreProcedure $arg1 $arg2 $argN 1;
    return 0;
}
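Because the core is GUI-free, you can chain runs from the Script Editor, a shelf button, or another script and let them grind away unattended. A trivial usage sketch (the selection pattern and argument values are arbitrary, not values from any of the real Zoid scripts):

   source "myPath/templateScript.mel";
   select -r "Zoid*";                 // or whatever naming convention your Zoids use
   coreProcedure 1.0 10.0 100.0 1;    // first pass, verbose
   coreProcedure 2.0 20.0 200.0 1;    // second pass runs straight after the first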
Creating Variety

Why use two monochromatic ramps, a black-and-white mask file, and a layered texture if painting the yellow spots directly on a blue background would lead to a single file and a single node? If we could get by with blue and yellow characters only, that would be a good,
clean method. But if we want procedural chromatic control over our characters, splitting the colors into three ramps (rim included) is crucial. The goal of the script Zoid_hueVar.mel (on the CD-ROM) is to change one of the most visible features of our characters: their colors. Color variation is actually one of the best ways to create any crowd, even if the geometry of the models is identical. Differences in color, from a distance, are enough to turn a CGI-looking, clearly duplicated bunch of identical characters into a realistically variegated group. There's a glitch, though. Most of the time, selecting a random color to replace an existing color is not enough. I used the scene file Zoid_16arr_hue.ma (on the CD-ROM) to test the script Zoid_hueVar.mel, and I invite you to do the same. Sixteen Zoids are comfortably placed in a 4 x 4 array, ready to have their color changed by the script.
Running a Script

To run any script in this chapter, follow these steps:

1. Type the two following lines in the Maya Script Editor, opportunely customized for your needs:

   source "myPath/myScript.mel";
   myProcGUI;

   You want to run the myProcGUI procedure from the file myScript.mel.

2. MM+select both lines, and drag and drop them onto a shelf. A new MEL icon becomes available.

3. Simply click the icon to run the script. A small prompt window will pop up requesting your inputs.
As I mentioned, varying one or more colors of a character doesn't mean replacing an existing color with another randomly chosen color. Often the art department chooses the colors, and you are allowed to change them only slightly. In other cases, colors can be quite different from one another but must be chosen from a precise color palette. A third possibility, sometimes overlapping with the first, allows for wild variations but requires that the colors maintain the "chromatic distance" between their hues (see Figure 6.5). For example, if one of the colors is a dark orange (hue 20), and another is a bright green (hue 100), a shift of +80 to both of them will maintain the chromatic distance, turning the first color to a bright green (hue 20 + 80 = 100) and the second to a bright cyan (hue 100 + 80 = 180). If you choose the starting colors properly, any parallel random shift in the hue generates an equally nice and rich combination. For example, try to change our Zoid's three primary colors, manually modifying the shader tree in the file Zoid_base.mb, and test different starting combinations, such as two colors relatively close, say, two shades of purple, and a complementary one, a yellow.

The script has two main working modes: absolute and relative. If a number is the first parameter, the mode is absolute. In this case, the new hue will be generated by choosing a random value in a numeric range centered on the first parameter and sized by the second. For example, if the first parameter is 100 and the second is 0, the random range is collapsed to a single number. Therefore, all Zoids will have the same color, a bright green (hue 100). If, instead, we use a size of 180, the range is ±180 centered on 100, covering the entire cyclic hue spectrum (0-360), guaranteeing the wildest color variations. If the word "rel" is the first parameter instead of a number, the size of the random range is defined by the second parameter as in the earlier case, but the center of the range
retains the hue of the Zoid currently being processed by the script's main loop. Therefore, if the Zoid is yellow (hue 60) and a range size of 60 is allowed, in the two extreme cases the Zoid's main color becomes red (hue 60-60=0) or green (hue 60+60=120), while another Zoid with a different color will have different extremes (see Figures 6.6 and 6.7).

Figure 6.5: The RGB color wheel: two colors keep their reciprocal chromatic distance if the distance between their hue values does not change.

No mode is better than the other in this script; they're simply different. If you need only small variations of a precise color, use the precise hue of that color, and then keep the range size small. The relative mode might be the right choice if you have, for example, two groups of Zoids, one in orange and the other in blue, and you want to make the individuals slightly different without shifting too much from the color they have already. The relative mode and a small-sized range will keep them mainly orange or blue, but the different shades of those two colors will make the individuals different and yet of two distinctive groups.

The final input parameters allow us to choose the color affected by the script. Each Zoid has three colors: the main one, by default blue; the yellow spots pattern; and the red fake rim light. Typing the keywords "main," "pattern," and "rim" separated by spaces affects all three colors. Although their hues are different, their random shift on the color wheel will be uniform in any case. (You can change this behavior by uncommenting lines 70, 102, and 134 and commenting the line immediately following each of them.) The use of the single word "all" has the same effect. Any subgroup or even just one of the colors can be affected. You can decide, for example, that you want to keep the default yellow color for the spots pattern, but allow both the main color and the rim to change slightly. Or maybe you just hate the red rim light, and you want to change it to different shades of green. Simply type one of the keywords "main," "pattern," or "rim" to tell the script which color should be affected.
Figure 6.6: Wild color variations—Input values: rel 180 all
Figure 6.7: Subtler color variations—Input values: rel 60 all
A few changes to the script Zoid_hueVar.mel can lead to two additional scripts, one varying the saturation and the other varying the value of the colors. Only one difference must be taken into account: hue is cyclic, meaning that a hue of 360 is the same red as hue 0. Instead, both saturation and value have little meaning outside the 0.0-1.0 range, often generating artifacts and unwanted effects when pushed beyond those limits. You must consider this when generating new colors, clamping the result, if necessary, to fit in this range.
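To make that difference concrete, here is a minimal MEL sketch of the two cases: hue wraps around, while saturation and value clamp. The variable names and numbers are illustrative only; this is not the code from Zoid_hueVar.mel.

   // hue: shift and wrap into the 0-360 range
   float $baseHue = 60.0;                       // the Zoid's current hue
   float $range   = 80.0;                       // allowed shift
   float $hue = $baseHue + rand(-$range, $range);
   $hue = fmod($hue, 360.0);
   if($hue < 0.0) $hue += 360.0;                // e.g. 370 becomes 10, -20 becomes 340

   // saturation (and likewise value): shift and clamp into the 0.0-1.0 range
   float $baseSat  = 0.7;
   float $satRange = 0.2;
   float $sat = clamp(0.0, 1.0, $baseSat + rand(-$satRange, $satRange));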
Size and Proportions

Color can undoubtedly differentiate characters, but so can shape. In real life, size and shape are great ways to tell the difference between individuals in a crowd. Although animals in the same species tend to look alike to the untrained eye, most people can quickly pick up even subtle differences in size and proportions between two human (or humanlike) figures. The purpose of the script Zoid_sizeVar.mel (on the CD-ROM) is to create size and proportion differences in the Zoid models. Mostly acting on the bones of the character skeleton, the script changes the proportions of the limbs, torso, and head in a limited, cartoonish—but reasonable—manner (see the results in Figure 6.8). You can test this script with the Maya file Zoid_16arr_size.ma on the CD-ROM.
Most of the parameters for these changes are internal to the script; there are too many of them for the simple interface I had the time to develop. The only input the user needs to provide is the minScale and maxScale, loosely defining the final height range of the Zoids. A
plethora of things happen in the script, though. First, the length of the Zoid skeleton is modified. He gets taller or shorter, the ratio between the length of his legs and the height of his torso is modified, and the length of his arms is changed. Although some variations are allowed, a general rule hard-coded in the script is that the length of the limbs is proportional to the height scaling factor of the Zoid. This rule guarantees that a tall Zoid will have decently long arms and that a small Zoid will have shorter arms. For instance, the formula behind the scaling of the arms is the following simple one from line 60 of the script:

   $armsLength = (`rand 0.9 1.2` * $rndHeight);
Here $rndHeight is the overall random height scaling factor. As you can see, a random number is involved, to make sure that two similarly tall Zoids won't necessarily have the same arm lengths, but the final number is also connected to the Y scaling of the character. Y scaling, however, is not the simple scaling value of the Zoid's top transform node. You can scale the character only by changing the scale attributes of its joints, and the overall height is actually a composite of the length scaling of the legs and torso joints. This makes the object space uniform from Zoid to Zoid, allowing for easier debugging of the script's simple math in case of weird results. Additionally, and more important, a nonproportional scaling of the character's top node, say <<1.0, 2.0, 2.0>>, would lead to warping of the joints and the overlying skin, an effect not immediately noticeable if the character is in the neutral position, but easy to spot as soon as there's a left/right or forward/backward tilt of the head and torso.

In the phase I just described, leg and arm joints are scaled proportionally on all their axes. This process is quite similar to scaling the top node, which generates a proportionate character, simply bigger or smaller. At this point, the script forks into three possible flows, through the long if block between lines 109 and 197. In 30 percent of the cases, nothing happens. The Zoid you now have is what you'll see in the scene, and no further changes are made to its bone structure. In 35 percent of the cases, you'll get a strong Zoid. The thickness of the legs, torso, and arms is increased, basically through a simple change to the Scale Y and Scale Z attributes of each joint. The X scaling, or length of the joint, is untouched. The result is usually a rather muscular Zoid. In the remaining 35 percent of the cases, the thickness of the limbs and torso also changes, just in the opposite direction, generating thin, skinny Zoids.

Figure 6.8: An example of the variations generated by the script Zoid_sizeVar.mel on the scene Zoid_16arr_size.ma—Input values 0.8 and 1.25

These percentages and scaling values embedded in the code are a matter of personal taste and testing. They do not have a particular logic other than that the arms shouldn't get long enough to touch the ground level. How much distortion
the characters could tolerate without badly affecting the overall figure is mainly a matter of stretching and compressing joints for each of them. The whole script is not much more than a little servant patiently doing what a human would do if they had to make variations to hundreds of characters and could survive the surely life-threatening boredom. Finally, the script changes the position of one object, the bum_arrival locator. This object is a weighted constraint dictating where the spineRoot joint, the root of the entire character's skeleton, is to be positioned when the character is fully standing. Its Y value (see Figure 6.9) depends on the length of the legs if you don't want the long legs of tall Zoids bent almost 90 degrees and short Zoids floating a good 20 percent of their height from the ground. With the colors and the shapes modified, we're done with the static characteristics of a model. Now let's see what kind of tricks the animation team needs to tackle the problem of the same animation used for many similar but never identical characters.
The Action Team

The job of the action team is tricky. Usually, a different set of characters requires a different set of animations. The animators then simply take the models from the production library and manually adjust the keyframes of a base animation to fit each model's proportions and differences. Unfortunately, in this case there's no comfortable model library. As you know, the models of our crowd scene are continuous variations (as opposed to discrete variations) of a single base model, whose proportions are not known in advance other than in terms of ranges. How can this team proceed then? The first goal is to create a base animation. At this stage, the animator in charge shouldn't consider the problems that the downstream TDs will run into (a bit of foresight doesn't hurt though). The goal right now is "just" to have one or more nice pieces of animation to work on. The animation is in the file Zoid_animation_good.mb (on the CD-ROM), but the question is, What happens when we apply this animation to a character whose proportions have been modified? The answer is that normally it won't work. Suppose that the height of our randomly resized character is twice the height of the base model. The legs will be heavily bent, because the spineRoot and foot joints are keyframed properly for the height of the base Zoid (see Figure 6.9). And the arms will have problems, because the point in space well above the head in the base animation is somewhere close to the ears of our grown-up Zoid. On the opposite side of the spectrum, a Zoid half as tall as the base Zoid will likely have both legs and arms fully straightened and feet dangling in the air, not able to reach the ground level. For our animation, we can therefore identify two areas critical to generalizing the base animation: the spineRoot joint—the root node of the skeleton—and the IK handles that control the arms' wrists. Without resorting to a script that analyzes the animation curves, somehow changing each keyframe with a highly customized algorithm, we can tackle the problem with the help of weighted constraints, an idea I had after an extremely informative discussion with my former colleague, Thomas Hain, about character setup. Let's start with the spineRoot joint.
Figure 6.9: The effects of the unadjusted base animation on Zoids with modified proportions. Whereas the base model would stand correctly, a tall Zoid has its legs bent, and a short one seems to be floating well above ground level.
Figure 6.10: Breaking the connection between the animation curves and the spineRoot joint (yellow arrows) and creating new ones (blue arrows) from the same curves to the locator bum_anim
With the help of the Hypergraph, let's break the connection between this joint and its two animation curves (see Figure 6.10): select the two blue arrows connecting them to the joint, and simply press the Delete key. Now we can connect those animation curves to a newly created locator, which we will call bum_anim, and parent it under the Zoid top node. In this way, the locator is in the same object space as the spineRoot joint. A simple MM drag and drop from each curve to the locator node opens the Connection Editor, in which the output of each animation node needs to be connected to the proper translation attribute. Finally, we point-constrain the spineRoot joint to the locator, leaving the weight of the constraint at its default value of 1.0. If you did everything properly, you shouldn't see any difference in the resulting animation. The spineRoot joint is still following the same trajectory in space, since the original animation curves are still involved. The only difference is that now the animation curves don't drive the joint directly, but through the point-constraint to the locator. Now let's create a second locator, bum_arrival, also a child of the character's top node. The spineRoot joint has to be constrained to this locator too, but let's change the weight of the new constraint to 0.0. Notice that this locator won't be animated. Now let's alter our script Zoid_sizeVar.mel, inserting what is currently line 91, the line responsible for modifying the Y coordinate of this second locator to match the height of the hips when the legs are extended: setAttr ($suffix + ":bum_arrival.ty") (0.511 * $legsLength * 0.97);
Finally, let's keyframe the two weights so that when the character is fully standing, the spineRoot joint is fully and exclusively constrained to the object bum_arrival and so that when the character is seated, it's fully and exclusively constrained to the object bum_anim—of course with smooth transition between these two states (see the animation curves in Figure 6.11).
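A minimal MEL sketch of this double-constraint setup follows. The locator and joint names come from the text, but the frame numbers and the auto-generated weight aliases (bum_animW0, bum_arrivalW1) are assumptions that will depend on your scene; this is an illustration, not the chapter's actual setup code.

    // Hedged sketch: blend the spineRoot joint between the animated locator
    // (bum_anim) and the static standing target (bum_arrival).
    string $con[] = `pointConstraint -weight 1.0 bum_anim spineRoot`;
    pointConstraint -weight 0.0 bum_arrival spineRoot;
    // Seated at frame 1: follow the original animation curves through bum_anim.
    setKeyframe -time 1 -value 1 ($con[0] + ".bum_animW0");
    setKeyframe -time 1 -value 0 ($con[0] + ".bum_arrivalW1");
    // Standing at frame 40 (assumed): lock the joint onto bum_arrival instead.
    setKeyframe -time 40 -value 0 ($con[0] + ".bum_animW0");
    setKeyframe -time 40 -value 1 ($con[0] + ".bum_arrivalW1");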
Now, what do we have? At the beginning of the animation, the animation curves drive the motion of the spineRoot joint through the first locator/constraint, bum_anim. And since all characters in the final scene will have an unmodified copy of the original animation curves, they will all sit at the same level, no matter if their proportions have been modified. As soon as they get up though, the second locator, bum_arrival, becomes more and more influential, eventually locking the spineRoot joint in its position. But that was the locator whose Y coordinate was changed according to the modified legs length; therefore, the ending height of the spineRoot joint will no longer be the one given by the base animation curves, but the one guaranteeing the character's legs full extension (see Figure 6.12).

Figure 6.11: Animation curves for the weights of the spineRoot constraints

Figure 6.12: On the right, a blue arrow traces the trajectory of the spineRoot joint of a base model. On the left, on a taller than average Zoid, the orange arrow traces the original animation trajectory, and the blue arrow traces the actual motion of the spineRoot joint, under the influence of bum_anim (brown locator), bum_arrival (yellow locator), and the animated weights of their constraints.

I used the same method to animate the arms. Originally an IK handle at each wrist drove both the shoulder and the elbow joint rotations. But then, similarly to what we did to the spineRoot joint, the animation curves need to be transferred to a new locator, leftHand_translAnim, and the IK handle needs to be constrained to it. This time, though, two additional locators/constraints share the control over the IK handle with their weights. One is named leftTight_marker and is the initial position for the hand resting on the lap. This locator is actually parented to the leg joint in a way that, for the first frames of animation, when the legs start to move to propel the character upward, the hands are held there, as if pushing the upper body mass on the lower limbs. Then, the weights are smoothly changed, first to give full control of the handle to the locator holding the original animation curves, and then to a third locator, leftHand_marker, positioned well above the character's head, to
guarantee the full extension of the arm. Also in this case, only one locator is holding the translation animation; the other two merely mark a position in space. In each case, only one locator is actually animated, but this is not a problem because the character stays in the arrival position for only one frame. If such a position had to be held for a while, a simple jitter of the arrival locators would prevent a dead-on-spot look typical of perfectly still 3D objects. With a character about one unit tall, as our Zoids are, a jitter of 0.02 to 0.05 units would be enough. Finally, I created a character set for the animated Zoid and encased its animation curves in two separate animation clips, one for the upward movement, the other for the downward movement. Although the choice of a segmented animation turned out not to be useful in the scene produced for this chapter, you might use the concept in other situations, if you want to combine more clips in different chains of action. In the case described here, the idea was to hold the standing position for a short time, maybe have some jittering on the raised arms to avoid the characteristic dead-on-spot look, and then have the Zoid sit down again. But in the end, any hold longer than a few frames widened the wave pattern made by the crowd and resulted in an unnatural look that I simply decided to avoid.
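If you did decide to hold the standing pose, the jitter mentioned above could be driven by a small expression along these lines. This is only a sketch: the locator name, the base height, and the noise frequency are placeholders rather than values from the chapter.

    // Hedged sketch: tiny noise on an arrival locator to avoid the dead-on-spot look.
    // The 1.2 stands in for the locator's actual resting height in your scene.
    expression -name "markerJitter" -string
        "leftHand_marker.translateY = 1.2 + 0.03 * noise(time * 4);";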
The Horde Team

At last! Our last line of defense, the TDs will pull the shot together! Their task is daunting indeed. First, they need to duplicate the characters, in some cases generating thousands of them, varying their color and size through the scripts previously developed. It's important to test the scripts thoroughly on a small number of individual characters, but only at this late step in the process are these scripts tested fully. A freshly generated and differentiated character, though, is not really helpful if it doesn't have at least an initial position. The initial positioning of each member of a crowd depends on the task. If you're dealing with a flock of birds, for example, it's reasonable to place them in a volume initially, and the same applies to a school of fish. On the other hand, this wouldn't be quite as reasonable for our legged characters, who are definitely ground-friendly creatures. Even on a surface, a variety of dispositions can occur. A marching troop is basically a regular 2D array. Or hundreds of Zoids might be sunbathing on the slopes of a hill, randomly scattered on its gently curved surfaces. In this example, our scene is located in an indoor basketball court (sportPalace.ma on the CD-ROM). The crowd sits in the bleachers, which are arranged in rows on wide steps. (See Figure 6.13.) To populate the scene, I wrote a script that generates a single, arbitrarily long row of Zoids. The script Zoid_rowCreator.mel (available on the CD-ROM) requires six input parameters. (Be sure to open the script with a text editor and customize the few lines holding a directory path.) The first parameter is the length of the row—not much to say about that. The bleacher in the scene is actually split into three sections separated by stairs. The two sections on the sides are about five units long, and the one in the middle is about six. The next two parameters are something we already know about. They're the minScale and maxScale input parameters to the script Zoid_sizeVar.mel. In fact, Zoid_rowCreator actually calls this other script internally, along with Zoid_hueVar.mel, to manage all the possible variations described earlier in this chapter. The last three parameters define how far the Zoids are sitting from one another and what this distance is based on. If the last parameter is center, the characters are placed side
Figure 6.13: The indoor basketball court, a few hours before the arrival of the audience
by side, measuring the distance from their centers. For example, if the random range is flattened to only one possible value, our friends will be evenly spaced along the row. And if the same flattened value is used at various times to create many rows, it will basically build a 2D array of characters, a bit like a troop in parade formation (see Figure 6.14).

Figure 6.14: A platoon of Zoids

The setting I ended up using for the scene is instead width. This flag allows for closer but never interpenetrating characters, because the spacing defined by the two previous inputs decides the distance shoulder to shoulder instead of center to center, taking into account the width of each character. For example, if both min and max spacing values are set to 0.0, the shoulders of the characters always touch the shoulders of the Zoids on either side. A safer 0.2 to 0.8 range places each Zoid's shoulders at a minimum of 0.2 and a maximum of 0.8 units from its neighbor. In Figure 6.15, the Zoids in row A are evenly spaced, the shoulders of the Zoids in row B touch, and the Zoids in row C are at a shoulder-to-shoulder distance range.
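Assuming the script exposes a global procedure whose arguments follow the order just described (row length, minScale, maxScale, minimum spacing, maximum spacing, and spacing mode), a call for one of the side sections might look like the line below. The procedure name and signature are assumptions for illustration, so check the script's header before relying on them.

    // Hypothetical invocation: a five-unit row, size factors 0.8 to 1.25,
    // shoulders kept 0.2 to 0.8 units apart.
    Zoid_rowCreator(5.0, 0.8, 1.25, 0.2, 0.8, "width");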
Figure 6.15: Rows of Zoids that are spaced differently
Additionally, you can select one or more objects, usually locators, before running the script. If you don't select an object, a locator is created at runtime, and the row of characters follows the positive X axis, starting from the world origin. If you select one or more objects, one row for each selected object is created. Each row is of the user-defined length and is placed starting where each object is, consistent with its orientation and scaling. The ability to place Zoids based on selected locators is an extremely useful feature when placing a final large crowd. In the scene sportPalace.ma on the CD-ROM, 18 locators are pre-positioned, waiting to be used by the rowCreator.mel script. Once all systems are go for the creation of the final scene, I select the locators of the side sectors of the bleacher (length 5u), let the script run smoothly for 15 minutes, let it run a second time to take care of the middle sector (length 6u), and voila: our empty scene is automagically populated by 180 colorful Zoids (see Figures 6.17 and 6.18). Here I should mention a couple of significant details. First, the script doesn't actually duplicate the models, but creates references of the file Zoid_animated_good.mb, which contains an animated version of the Zoid. This keeps the file size extremely small, in this case 4MB, but of course doesn't save RAM during render time, when the 3MB references loaded 180 times push the amount of used memory well beyond 500MB. Additionally, if we have to change the animation, it's enough to change the file to which all the references point instead of editing the heavy crowd scene itself. Second, as I mentioned earlier, the scene is intentionally saved as a Maya ASCII file. On a dual 800MHz PIII with 1GB of RAM, the scene with all the characters in it takes about 10 long minutes to load, something you don't really want to do too often. But you might have to use a new animation, leaving the old one, for compatibility reasons with other scenes in
Figure 6.16: Each prepositioned locator will be parent to a row of characters.
Figure 6.17: A sky-cam view of the bleacher filled with Zoids
production, with the original name and in its directory. If you have a Maya binary file, the only way to deal with this is to open the scene, take a 10-minute break, and then run a small script to change the filename to which the 180 references point. Using references and a Maya ASCII file, I can instead modify the base animation scene, rename it so that it doesn't overwrite previous versions, and then use the Replace function of my favorite text editor and quickly update my crowd scene. But, hold on! The scene is not quite finished! All references are statically different, their colors and proportions vary, but all their animations are perfectly in synch: they all stand up at the same frame and then sit back down in a perfectly synchronized collective movement. We'll now use the Zoid_crowdAnimator.mel script (on the CD-ROM) to finalize the scene by varying, randomly but logically, the animation of each character. Together with the few props of the basketball court, one hidden object is buried in the scene sportPalace.ma: "previzGrid". Hierarchically located under the previz_grp node, the hidden object is actually a simple NURBS grid with about 200 patches in length and 6 in width. Although 200 is an arbitrarily large number, loosely related to the possible random locations of the Zoids, the 6 patches in width are related to the precise number of steps in the seats. Some of the CVs of this linear NURBS surface will be a primitive form of dynamic memory, holding the status tag of a precise Zoid.
A status tag is the name or the index of the action that the character is currently pursuing, usually mirrored by a specific animation in the animation library.
In our case, only two status tags are necessary: stand up and be seated. The script's first task is to find the closest CV to each Zoid and store its UV coordinates internally, connected with the character name, which is usually in the form Zoid###:Zoid; ### is the reference number. The script then runs frame by frame through the entire timeline to check if and
Figure 6.18: The lattice responsible for the Y motion of the grid CVs. The closest CV to a character defines, with its motion, when that character stands up and sits down.
when any of these assigned CVs changes its Y value. This motion is created through a simple lattice deformer in the same group with the grid, running over it and raising the CVs like a wave (see Figure 6.18). When a change in the height of a CV is detected, the frame number is again stored along with the character name and then considered a time-marker of the character switching from seated to standing or vice-versa. Once this pre-processing has finished (in the 180-character version of the scene, 7 to 30 seconds per frame are necessary for this phase), the script flows through the third and last part, where all the information retrieved until now is mixed with a pair of user-provided inputs: the enthusiasm and the reactivity. These two extra attributes (silently added by the script to the character's top node if not previously available) have their values randomly chosen in the ranges requested by the user. The enthusiasm defines how much faster (or slower) the character executes its actions compared with the base animation. For example, a Zoid with an enthusiasm of 2.0 will rise twice as fast as the base animation timing, and a 0.5 value indicates a slow Zoid, taking twice the time to complete the same action. The reactivity decides how much delay there is from the moment the Zoid should begin the action, given by the CV movements, and the moment it actually does so. For example, human reactivity to a stimulus is on average about 0.1 seconds, barely two or three frames, but in a crowd scene I tend to use higher, slightly unrealistic values because the multitude catches most of the attention and it's difficult to spot small differences. The default values provided with this script's interface usually give good results.
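The following minimal sketch shows one way the two attributes might be added and then turned into a per-character offset and time scale. It is not taken from Zoid_crowdAnimator.mel; the node name, attribute ranges, trigger frame, and clip length are assumptions.

    // Hedged sketch: per-character enthusiasm and reactivity.
    string $top = "Zoid001:Zoid";                 // assumed character top node
    if (!`attributeExists "enthusiasm" $top`)
        addAttr -longName "enthusiasm" -attributeType double $top;
    if (!`attributeExists "reactivity" $top`)
        addAttr -longName "reactivity" -attributeType double $top;
    float $enthusiasm = rand(0.5, 2.0);           // speed factor for the action
    float $reactivity = rand(2, 8);               // delay in frames before reacting
    setAttr ($top + ".enthusiasm") $enthusiasm;
    setAttr ($top + ".reactivity") $reactivity;
    // The character's clip would then start $reactivity frames after its CV
    // begins to rise and play back $enthusiasm times faster than the base clip.
    float $trigger = 120;                         // frame at which the CV moved (assumed)
    float $baseLength = 30;                       // length of the base clip (assumed)
    float $clipStart = $trigger + $reactivity;
    float $clipLength = $baseLength / $enthusiasm;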
Figure 6.19: The width of the wave is proportional to the length of the base animation clips, and the front speed is defined by the speed of the hidden lattice.
Last-Minute Changes

At the beginning of the chapter, I talked extensively about doing things one way, but being able to modify them. Let's see if we managed to keep that flexibility. To start, once the Zoids are placed, you can move them, both in rows, through their common parent, or alone, grabbing each character from its top node. You can change their colors repeatedly, using the Zoid_hueVar.mel script, but also tweaking the actual shader trees if the change is needed on a specific character. You can also change the size of a character, manually or by running the Zoid_sizeVar.mel script. In some cases, this script might produce some interpenetration because the spacing between two closely sitting characters is not recalculated. If you resize manually, the arrival locators for the spineRoot joint and the arms' IK handles might also need some adjustment. You can adjust the animation at any time. You can edit the scene file from which the references are sourced, and you can control the characteristics of the wave of standing Zoids: its overall speed, by increasing the speed of the lattice over the grid, and its individual timing, by lowering the enthusiasm, causing a Zoid to spend more time standing up (see Figure 6.19). You can manually adjust the time offset and scaling of all the clips, but because the characters are actually only references, their curves are locked, and you can't edit them. The only workaround, other than editing the Zoid_animated_good.mb file, is to import the file instead of referencing it.
Figure 6.20: A frame from the final rendering
Always Room for Improvement

Figure 6.20 shows a frame from the final rendering, and you'll find two versions of the animation available in QuickTime format on the CD-ROM. But you can always find ways to improve on an animation. Given enough time, you could create new scripts or upgrade those we used in this chapter. In addition, you could do the following:

• Optimize the scripts, especially the last one, which can have runtimes of up to 30 to 40 minutes for 180 characters.

• Create a better interface that would expose more of the script's hard-coded values for the user to modify and test.

• Add a script that varies the colors and creates a palette that the user can modify instead of one that produces purely random colors. In addition, such a script could randomly load different pattern masks from a list of filenames.

• Use the methods I've described in this chapter to create radically different characters by varying the model being referenced, giving the scene a burst of variety in the crowd. Kids, women, and the elderly all have distinctive features that are unlikely to be pulled out of a single model.

• Right now, the characters look only forward; in a real-life wave, the characters would follow the wave motion for their own visual pleasure and thus would know when to
stand up. Placing an aim constraint on each head joint to an object following the wave front would be a good way to start implementing these actions.

• Although the standing position, with legs and arms extended, is similar for all characters, the sitting position should be slightly different for each character. You could even subtly animate this position. For example, vary the rest position of feet once in a while, and tilt the torso left/right/forward/backward with a slow frequency.
Summary

In this chapter, I demonstrated one of the many ways you can handle a simple crowd scene, hopefully kick-starting or reinforcing your knowledge of the task and the problems inherent in it. First, we modeled, animated, and textured a character. Then we loaded the character as a reference and positioned it in the final scene. Next, we used two scripts to procedurally randomize some of the character's most important features: colors and proportions. Finally, we procedurally shifted and stretched the animation clips, according to the individual enthusiasm and reactivity values of each character. Now it's your turn to put into practice what you've learned. Have fun and do not keep the knowledge for yourself: share it!
Taming Chaos: Controlling Dynamics for Use in Plot-Driven Sequences

John Kundert-Gibbs and Robert Helms
Into many a high-budget CG (computer graphics) production—and a number of low-budget ones—comes the need to animate complex, natural-looking events that are either too difficult, too dangerous, or too uncontrollable to film in real life or that need to exist in a stylized, rather than a real, world. Although creating a live-action explosion might be easier than creating a CG explosion, the shot might be dangerous or even impossible to film in the real world. (It's unlikely that the French government would let one blast the Eiffel Tower, for example!) Practical miniatures can be a good substitute for the real items, but they are expensive to build, and the production crew only gets one chance to get the effect right. Even more important, a hands-on art director might want to see a range of different looks for the explosion, which would drive the time and cost out of most productions' budgets. Here is where creating CG simulations of natural (or naturalistic) events comes into play: once a CG simulation is set up, you can change any number of variables that feed into it, alter camera angles, and even violate strict physical accuracy to help "sell" the shot. This chapter and the next will deal with re-creating these events so that they are believable for the audience and under enough control that you can adjust the effect to get a specific look, even if it's not strictly accurate compared with its real-world counterpart. Chapter 8 will deal with particle effects akin to the explosion mentioned earlier. In this chapter, we will deal
with smaller numbers of larger bodies with unvarying shape (rigid bodies in Maya's parlance) and discuss how to get these objects to do just what your director—or you yourself—want them to do.
When To Use Rigid Body Simulations

It is not always necessary, or even advantageous, to use rigid body simulations in a CG shot. If you are dealing with very small numbers of regular objects, you may have better luck simply keyframing the animation. For example, if you want a basketball player to dribble twice and drive for a lay-up at the basket, keyframing might well be the way to go. Although keyframing bounces and the like is a challenge, you can probably produce the animation more quickly using keyframes, and if (when!) changes are necessary, you can make them quickly and intuitively, whereas with dynamics you will bang your head against a bunch of stubborn numeric variables that don't affect the animation as they "should." When you need to deal with complex shapes, such as shards of glass, or even larger numbers of simple objects, such as pool balls, however, a well-thought-out dynamics simulation may be the way to go. In either of these examples—which we will cover in detail in this chapter—and a number of other related examples, keyframing would be an onerous task. Given the complexity of interactions of the various elements, it would be extremely difficult to get a realistic look or even to change parts of the animation effectively. Maya, as well as a number of other top animation programs, makes the task of creating a basic dynamics simulation of even complex objects fairly easy. The problem that many of us run into, however, is how to control the basic "first draft" motion Maya calculates so that the simulation data will work within the plot segment given by the story and will gel with the general look and feel of the production. This must be true whether the effect is to be an "invisible" CG effect in a live-action film or a stylized effect in some sort of animated piece.
The Matrix: Creating a Stylized Reality

Consider an example of stylized reality to illustrate how an effect has to gel with the general feel of a production: the scene in The Matrix when a helicopter smashes into a glass skyscraper. Rather than a photorealistic explosion, the directors wanted to show the artifice of the "matrix" reality of latter-day twentieth century, so the helicopter first warps the glass of the building before exploding and falling to the ground. Here, violating strict physical accuracy was key in the larger goal of the film as a whole, and thus the artists' ability to create a stylized effect was a necessary ingredient to telling the story through visual effects.
The Setup: Plan, Plan, Plan

If you've read through other chapters in this book, you've probably noticed the repeated emphasis on planning, and with simulations the situation is no different: a clear, concise knowledge of what you need to accomplish and why will make your effects more clear and effective. A lack of careful thought at the beginning will almost surely result in lost time or in a poor or muddled shot. If you are working on a large-scale production, in this first stage you will collaborate with a number of other people, from director to storyboard artist to live-action DP (Director of Photography) to character animators to technical directors (probably you!). If you are
working on a solo project, you will only need to commune with yourself. In either case, it's extremely important to thoroughly talk (or think) through the effect, down to very small details, asking why it needs to be done for plot reasons, why it should be done in CG instead of via another method, and how the shot will be executed. At the same time, a storyboard artist should be rendering their ideas of what the shot sequence should look like, and these boards should be edited and then used as a guide for the final shot. Even if you have to draw stick figures yourself, a few hours using pencil and paper will save you time later in production. Additionally, you can study archival footage of effects shots from previous similar productions to help visualize the action of the shot. Referencing similar shots is always helpful in explaining and understanding the ingredients of a current shot. If the shot needs to be composited over live-action plates, be sure to discuss how shot information will be communicated between filming and effects people. Finally, don't ever forget to dig in and discuss the practicalities of a shot: how many people will be used on the shot, how much time will be allotted to the work, and how much extra research and development money is available for equipment, reference footage, and field trips to study the real thing. It's far better to have an honest idea of the realities of the situation—even if it's "you have 3 weeks, two people, and $500 to do this shot"—than to go in thinking you have many more resources than you actually do. The planning stage is all about one thing: communication. If your team (or you) can communicate efficiently and effectively at this stage, this work will establish open channels of communication throughout the later stages of production, and that can only help in the end. Nothing is worse than a production in which small groups of people restrict information to themselves, and not much is better than a collection of artists all working openly toward a creative goal.
Preproduction: Research and Development

Once initial planning is done, it's time to get to work. Unless you are doing a simple shot or have done almost exactly the same simulation before, you will need to take the time to research how to do the simulation and develop the means to do so. This stage might range from a few hours for a single person trying out different gravity and bounce settings to get a bouncing ball to act like a super ball, to several months for a large team of people to build, fly, and crash prototype pod racers (see the image on the first page of this part). The process, though, is similar in either case: create a simplified version of your dynamics simulation, and test various methods for getting the look and control that you will need for the final shot. In some cases, a simplified simulation will entail using fewer objects to get a feel for the whole (testing one or two super balls when the final shot calls for several hundred to bounce around). In other cases, it will require using simple stand-in models for those to be used later (colliding simple spheres as stand-ins for complex asteroid shapes to be used later), and in many cases, both simplifications will be necessary. These simplifications will not only increase interactivity with the program, but will make it far easier to see how the changes you make affect the scene—something that can be extremely difficult to get a handle on when all simulation elements are combined. Once a decent simplified version of the simulation is created, you or your team can test it under a number of conditions and determine if Maya's built-in dynamics system alone can handle the simulation or if some in-house plug-in will need to be written to enhance what Maya has to offer. Of course if you are working on a small-scale production, the time and expense of having someone write plug-ins may be out of the question, so you may have to get
as close as you can using what Maya gives you out of the box (which is pretty darned good anyway!) or perhaps write a few expressions or MEL scripts rather than full-blown plug-ins. In the beginning of the R&D stage, it is a good idea to break a simulation into one or more simple elements. By concentrating on one element of a simulation (a single ball or one shard), you can really dial in just the right settings to get that element to behave itself in an environment that allows speedy interaction because the simulation is simple and thus easy for Maya to calculate. Because the situation is simplified, you'll also get a better understanding of how adjusting individual settings affects the behavior of the system, which will help you later in the process. Once you have the individual elements working well, begin "stacking" them together, one after another. As you add each new element, you will invariably run into new problems (or should we say "challenges"?) as the elements interact in unexpected ways. This stage is probably the most difficult, time consuming, and frustrating of the entire R&D cycle—even of the process as a whole—so be prepared to spend a good deal of your time "budget" on this stage. It can take many a late night and weekend workday to get the various parts of a simulation working in harmony and in a fashion so that you at least think you understand enough about the system to make intelligent guesses as to how the system will behave when you adjust the settings.
Working on Multiple Machines

Maya's dynamics engine is extremely sensitive to its environment. Thus, even changing from one computer to an identical one next to it may wreak havoc on your carefully crafted simulation (apparently this has something to do with floating point round-off error). If possible, try to do your whole R&D cycle and your final render on the same machine. If the simulation will be too complex to run on a single machine, or if you are working with several other people, be sure to run your simulations from the start using all the machines you expect to use throughout the process. By testing the simulation on multiple machines from the start, at least you'll know right away if you are going to have problems later on. One bright spot is that more mature versions of Maya (3.0 and higher) seem to have much better consistency when working on multiple machines. Still, it is a good idea to test simulations on multiple machines if you plan or need to use them during the R&D or production phases.
As you finally get things working together, you can start running full-fledged test simulations to see if what you've built will actually stand up to the requirements of the shot. If not, it's back to the drawing board to figure out what went wrong. If you're lucky, a few small changes will fix the problem. In the worst case, you've gone about creating this simulation the wrong way entirely, and you need to go back to the early stages of the R&D cycle and come up with a different method for creating the simulation. Although we'd like to say this latter case rarely happens, it's actually rather common, so be sure to budget in a safety margin in case things go hopelessly wrong just when you think you've got everything working. If and when you do get all the elements working together, it's time to move on to the real thing and try to render some usable frames!
Production: Getting the "Shot"

If all has gone well in the R&D phase, the actual production phase may go relatively quickly—except for simulation and render times, which always seem to balloon to ridiculous
proportions when actual scene data is used! It's not a bad idea to keep a few—or few hundred—extra CPUs around to help with the chore. At this stage, you will need to put your real models into the simulation (if you haven't already), and you will need to place all the elements needed for the simulation into one scene. Obviously with large numbers of more complex models, interactivity and simulation speed will be slow indeed, so the better you have prepared in the R&D phase, the faster you will be able to produce the final shot. Generally, the fewer times you have to run the simulation in its final form to get it right, the better, as these simulations may take up to several days to compute. One very good way to speed things up at this stage is to place groups of rigid bodies in their own dynamics layers. In Maya, you do this by either creating multiple Rigid Body Solvers for the separate elements to live in or by placing rigid bodies on separate collision layers. When a group of rigid bodies is in a separate layer, none of its constituents will "see" rigid bodies in other layers, and thus Maya doesn't have to do as many collision calculations, which can speed things up a great deal. Obviously if you have a simulation in which all the elements have to interact (pins being struck by a bowling ball, for example), you can't use this trick, but if your scene consists of multiple groups of objects that don't interact with other groups, this trick can save you a great deal of time during final simulation work. Another tip to speed up production work is to cache or even bake your simulation data prior to rendering. Caching render data saves each frame's dynamics state to a file that can be accessed later at much faster speed than the original calculations took. Baking a simulation actually turns the dynamics into keyframe data, making the results extremely portable and allowing for fine adjustments to the actual keyframes to tweak a shot "just so." Also, if you are having problems with different machines producing different simulations (see the earlier sidebar), or if you want to use multiple machines for rendering, allowing one computer to create the simulation and then caching or baking it will ensure that the simulation is correct and available to all rendering CPUs simultaneously. Maya's renderer (or any other we know of) cannot efficiently render dynamics simulation data on multiple CPUs (that is, distributed rendering). Because each frame of a simulation depends for input data on the frame that came before, each separate CPU has to "run up" the simulation through every frame that precedes the one it has been told to render. Thus, if one CPU needs to render frame 2000 of a simulation, it will need to calculate the previous 2000 frames of the simulation before it can even begin rendering that frame. Obviously, this is a huge waste of computing resources. Additionally, each machine may in fact calculate the simulation differently from the others, so the resultant frames may be useless when you finally do get them! Using either the baked simulation or caching methods described here will resolve the problem.
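As a rough sketch, the caching and baking steps might look like the following in MEL. The solver and object names, the frame range, and the attribute list are assumptions rather than settings from a specific shot.

    // Hedged sketch: cache the default rigid solver, then bake one object's
    // simulated motion into keyframes so every render machine sees the same data.
    setAttr rigidSolver.cacheData 1;
    select -r cueBall;
    bakeResults -simulation true -time "1:300" -sampleBy 1
        -attribute "tx" -attribute "ty" -attribute "tz"
        -attribute "rx" -attribute "ry" -attribute "rz"
        cueBall;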
Integration and Postproduction: Cheating the Shot So It Works

Now that you have your simulation data completed, it's time to do final renders, cut them into your film, or composite them with other layers. As you work through this stage, one thing will all too often become painfully obvious: there are still problems with the shot! If these problems are large (bowling pins shooting through the floor as they are struck), something has gone
Figure 7.1: Glass shards stuck in a rough dinosaur model—obviously it's time to rework the simulation.
terribly wrong somewhere in the production pipeline, and it's time to go back and see what happened (see Figure 7.1). If, on the other hand, the problems are small, such as the sharp edge of an object piercing another for a couple of frames, a background plate showing through a rendered object, or specular highlights washing out in places, don't panic. There's no need to go back and expend the time and energy to get these details right in the original simulation and renders. You can use a range of postproduction tools to "cheat" your way to the perfect look. If your problem area really is just a frame or two, extracting those frames into a program such as Photoshop and cloning or airbrushing away the problem spots is an efficient solution. If the problems are over more frames or are global to the shot itself (such as poor color matching with the background plate), any number of compositing packages—from After Effects to Tremor to proprietary software—have all sorts of matting, color correction, and other tools that you can use to get rid of these nasty problems. If you thought ahead and rendered your scene elements in separate layers, you can even adjust individual elements of your shot to put that extra bit of polish on it. In general, when a problem gets below a certain threshold, our mantra is always "fix it in post!" There's really no reason to spend dozens or hundreds of hours to get things absolutely perfect in your renders if the problem can be fixed in minutes with a compositing package. After all, it's what your audience sees that counts, not how you got there! Now that we've covered the basic pipeline of how to create a dynamic simulation shot, we'll present two example shots that we recently worked on in order to give you a more
hands-on feel for how to put all this theory into practice. First, we'll go over how to get 16 pool balls to end up where they need to after being broken by the cue ball, and then we'll discuss how to create convincing shattering glass effects to be composited over a live-action background plate.
Working Example 1: A Quasi-Deterministic Pool Break

The story is simple enough: Two robots are playing pool, and one of them makes a number of increasingly difficult shots to sink the balls. First, however, the other robot needs to break the pack of balls so that they end up in positions called for by the storyboards. Although creating a pool break isn't extremely difficult (it's not as easy as one might first imagine, however!), getting the balls to end up in the right places will require some trickery and a good deal of careful adjustment to the basic pool-break simulation. This production is a solo animation short by one of the authors.
Preproduction: Planning the Shot

We must initially ask and answer a few questions to inform the rest of the production cycle.

Why is this break necessary? To set up the one robot's later run of the table.

Will this animation be photo-realistic or stylized? It will be a somewhat stylized animation (have you ever seen two robots playing pool?!), but the pool balls should move in a realistic fashion.

Will this be a foregrounded (or "hero") shot? Yes, indeed; about 8 to 10 seconds will be devoted to a medium shot of the balls moving around the table.

Will any balls be sunk on the break? No. The robot who doesn't break is the one who will run the table, so no balls can be sunk on the break.

Will there be room to cheat the position of balls? Yes, to some extent. There will be immediate cutaways to the robots after the break, so the balls can be adjusted a slight amount without the audience becoming aware.

Does the position of every ball matter equally? No. The table run will be on the striped balls, so the solid balls (except for the eight ball, which must be sunk, and one solid ball, which needs to be jumped) just need to be out of the way of each shot.

How flexible is the positioning of the balls? There is some flexibility to the positioning, as long as the same basic shot is available that is required in the script.

These answers are important to later production work: the balls need to break convincingly (as if filmed by a real camera), but can be moved around a bit after the break—especially if the cutaway to the robots happens before they've all stopped rolling. Also, we care most about the position of ten balls—the stripes, the eight ball, the one solid ball, and the cue ball—
so the other five can go just about anywhere, making the requirements of the simulation somewhat less rigorous (still, ten balls ending up in just the right place won't be easy!). Finally, the robot has to sink a striped ball on each shot; it doesn't matter which striped ball he sinks when, so as long as any of the seven stripes end up in the right position, all is well. Once we have our basic questions answered, it's time to look at the storyboards. Although the boards are explicit about ball position, this information is spread out over dozens of individual boards. The first task, then, is to take all these boards and create from them a single diagram of where each ball needs to end up after the break (as well as the trajectory of each shot), as shown in Figure 7.2. Finally, we recorded some video of actual pool breaks. We also went on a research expedition to the Internet and found some physics experiments involving pool balls that provide information about the energy transfer of collisions between balls, and between balls and the pool table's bumpers (ball collisions are about 90 percent efficient, while those between ball and bumper are more like 50 to 60 percent efficient). The footage and energy transfer numbers will be useful as starting points for setting up our simulation—though obviously we'll deviate as necessary to get the right look and feel for our particular break. That about wraps up the planning stage. Now it's time to build a rig and get the simulation working.

Figure 7.2: A diagram of where each pool ball needs to end up after the break
Research and Development

The first step in development is to build a rigid body "table" with which the pool balls will interact (the balls are simple spheres with diameters of 2.25 inches). Although the table in Figure 7.3 is a good model for rendering, it is far too complex to use as a rigid body collision object. Every face or patch an object has requires another calculation on Maya's part, so complex objects such as this table slow simulations to a crawl. Additionally, the table is not flexible in a production sense, because the model has to remain as-is to look like a pool table, but the rigid body table may need to be moved slightly or adjusted in other ways to create a convincing simulation. For these reasons, we built a collision table out of one-patch NURBS planes, and we built the pockets out of hexagonal polygonal cylinders that had their front two faces removed, as shown in Figure 7.4. The surface of a pool table is in a 2:1 ratio of length to depth. Thus, if you build your table 80 units long, it should be 40 units deep for accurate reproduction of the way a real pool table works. See rigidTableStart.mb on the CD-ROM for an example.
Figure 7.3: A pool table suitable for texturing, but too complex for rigid body collisions

Notice that, as shown in Figure 7.5, the hexagonal polygon pockets do not match the higher-resolution rounded shape of the rendered pockets. On a large scale, however, this discrepancy is small and therefore unlikely to be seen. We thus decide that the advantage of faster simulation times outweighs the slight inaccuracy in collisions with pockets. If it ever becomes obvious that the collisions are not correct, we can go back and adjust the number of polygons in these cylinders (using the Split Polygon or Smooth tools) to add more detail. If you are following along with this example and building a rigid body collision surface, please read on before you construct your surface. This iteration of the table's shape has some deficiencies that need to be corrected.
When we get to actually sinking the pool balls in these pockets, the rigid body simulation will break down, as the normals of the cylinder are pointing outward rather than inward. (If normals don't point toward collisions rather than away from them, problems ensue.) To resolve this problem, you can first display the cylinders' normals by opening the Mesh Component Display section for the pCylinderShape in the Attribute Editor and turning on the Display Normals option. Then choose Edit Polygons → Normals → Reverse to reverse the direction of the normals.
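If you prefer to script this fix, the same reversal can be done with the polyNormal command; the cylinder name below is an assumption.

    // Hedged sketch: reverse the pocket cylinder's normals so they face inward.
    polyNormal -normalMode 0 -ch 1 pCylinder1;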
Figure 7.4: A rigid body table constructed from simple planes and cylinders
Figure 7.5: Detail showing the simplified rigid body pocket against the more detailed pocket shape that will actually render
When we created our planes and cylinders, we made sure that none of them were actually intersecting any others. Because these objects will all be passive rigid bodies, it probably wouldn't be a problem if they touched, but a small amount of space between objects reduces the chances of interpenetration errors and, we think, speeds up calculations just a bit, so we left a small amount of space between each surface and its neighbors. Although leaving space between the walls and pockets of our rigid body surface is probably a good idea, it is imperative that we do so between each pool ball and the surface of the table, as well as between each ball and its neighbor. As shown in Figure 7.6, we placed all the balls just a tiny bit above the surface of the table. If we do not separate all these active rigid bodies, we will end up with rigid body interpenetration errors (the bane of rigid body simulations!), and the simulation will break down immediately.

Once we have all our shapes in order, we can simply select all the table elements, and then select the balls and create active rigid bodies (choose Soft/Rigid Bodies → Create Active Rigid Body). Then, with all the balls selected, we create a default gravity field (choose Fields → Gravity). Now we need to fix the problem of the pool balls floating above the table. With each of the pool balls still selected, go to the Channel Box and change the rigid body bounciness to 0, and change damping, static, and dynamic friction to 10 (the maximum allowed). Next select the table surface and make the same changes in the Channel Box. As gravity pulls the balls down to the table, they will stick to its surface like glue, and after some time they will come to complete rest on the surface itself, which is what we want. Now play back the animation until all the balls are completely still (this may take several hundred frames, because they will likely rock back and forth slightly for a while) and then stop, but don't rewind the animation. Here is how we want the balls to be initially, so, in the Dynamics menu set, choose Solvers → Initial State → Set For All Dynamic. This resets all dynamics calculations to the current state so that when you rewind the animation, the balls will remain in their current state. A sample scene on the CD-ROM (rigidTableStart.mb) contains a rigid body table at this stage of the simulation process.

As you play back your simulation, be sure your playback rate in the Timeline Preferences is set to Free (choose Window → Settings/Preferences → Preferences, and then choose Timeline from the Settings submenu). Because Maya's dynamics calculations require the state
Figure 7.6: Detail showing that the pool balls start off "floating" above the surface of the table
Figure 7.7: The cue ball rebounding off the surface of the table
of each previous frame, the simulation can go berserk if any are skipped (which can happen when playback is locked to a certain number of frames per second), and you can end up with very strange results. Now we can actually get to work animating the balls! First, to simplify things, we change all the balls except the cue ball to passive rigid bodies (which means they are immovable) and start playing with initial velocity and rotation settings to get something like an appropriate motion out of the cue ball. While working on this initial setup, we immediately run into a problem that will be a plague throughout the research cycle. When the pool ball strikes the rear bumper (after ricocheting off of the pool balls), its rotational motion causes it to "climb" the side of the bumper and fling itself into the air as it rebounds, as shown in Figure 7.7. Although an interesting effect, this is not what pool balls commonly do when rolling around the table. (Yes, they occasionally do bounce off the table, but this is not the effect we're after, so we need to control it.) We will rarely present actual parameter numbers (such as dynamic friction) used during this discussion, because the fine-tuning of each of these settings is highly dependent on your setup. Thus, it makes more sense for us to present the strategies we use, rather than the results thereof.
We decided to adjust initial velocity and rotation settings rather than create impulses for the cue ball to help us better understand the simulation. It is difficult enough to get a good simulation by directly plugging in speed and rotation values, and all the more difficult when trying to keyframe different impulse values to create initial motion. Because a pool cue striking a cue ball is almost an instantaneous effect, we felt that using these initial velocity and rotation settings would work in the final animation, and thus working with impulse values would needlessly complicate our simulation work.
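For those who would rather script this stage than click through the menus, the following rough MEL sketch mirrors the steps described so far. The object and node names, the field handling, and the initial velocity and spin numbers are assumptions to experiment with, not values from the final scene.

    // Hedged sketch: make the balls active rigid bodies with the "sticky"
    // settle-down settings, hook them to gravity, then give the cue ball a push.
    string $balls[] = {"cueBall", "ball1", "ball2"};   // assumed names
    for ($b in $balls)
        rigidBody -active -bounciness 0 -damping 10
                  -staticFriction 10 -dynamicFriction 10 $b;
    string $grav[] = `gravity -magnitude 9.8 -directionY -1`;
    select -r $balls;
    connectDynamic -fields $grav[0];
    // Initial push and roll on the cue ball's rigid body node (assumed name).
    setAttr "rigidBody1.initialVelocityZ" -40;
    setAttr "rigidBody1.initialSpinX" 25;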
Figure 7.8: The cue ball sticking inside the back bumper, an error caused by interpenetration of the two rigid bodies
Figure 7.9: With multiple balls and higher velocities, interpenetration errors and balls jumping off the table become a pervasive problem.
To resolve this problem, we can try reducing both static and dynamic friction of the bumpers to 0 and set gravity to a much higher amount, such as 50 instead of its default 9.8. We also find we have to lower the friction of the cue ball and table surface to fairly low numbers because table friction is partially responsible for the rotational speed of the ball. We need damping (a setting that causes an exponential falloff of motion so objects will settle down to rest in a simulation) at a small decimal number such as 0.1 or 0.2, or else the ball will never come to complete rest. Finally, as shown in Figure 7.8, we quickly find that we are getting interpenetration errors when the cue ball strikes the bumpers at the high velocity it needs to travel in order to look like a convincing pool shot. Sometimes this error is just noted by Maya, but other times it causes the ball to "stick" inside a bumper and refuse to move anymore—a definite breakdown of the simulation! Because of the way Maya's dynamics engine calculates friction, bounciness, and damping—it multiplies the settings of the two colliding objects together—you initially need to adjust settings of these attributes in tandem. In our case, this means adjusting the static friction, say, of both pool balls and table surface to a higher or lower value at the same time. When it is time for more refined adjustments of these settings (that is, when the simulation is close to correct), the objects can be adjusted separately to tweak exactly how they behave.
Undeterred by our initial problems, we move forward and reset all our pool balls to active rigid bodies and rerun the simulation. First, we find that the pool ball's initial velocity has to be set much higher than it had been when it was the only active rigid body, or else the balls don't scatter well. As we increase the velocity of the cue ball, and experiment with rotational speeds as well, we find that our initial problems of interpenetration errors and balls ricocheting up into the air are, of course, multiplied by the number of balls and the higher energy being imparted to them, as shown in Figure 7.9. After a bit of tweaking and reducing friction to very low values, we do finally get a break that is somewhat convincing, as shown in Figure 7.10 (an animated version, b r e a k T a k e l . m o v , is available on the CD). However, we still have two nasty problems. First, to keep balls from jumping up, we had to reduce their rotational speed and thus had to reduce friction across the board. This seems fine until one looks at the animation: the balls appear to "skate" across the surface of the table rather than rolling as they should. Second, the simulation is very sensitive to initial conditions, breaking down (interpenetration errors and jump-
Figure 7.10: A decent run of the break simulation (see the full animation on the CD)

This second problem is especially insidious, because we need a great deal of fine control over the simulation so that we can get the balls to end up where they need to go. To try to resolve these issues, we raise the friction levels (dynamic more than static, because we want the balls to roll more freely once they are moving slowly), increase the initial velocity of the cue ball even more to compensate for the added friction, and reduce the step size in the rigidSolver settings window to a much smaller number, such as 0.005 or 0.001. This last adjustment, while helping with the interpenetration errors, really slows simulation time, because Maya has to do a great many more calculations per frame of animation.

The rigidSolver Step Size setting (available by choosing Solvers → Rigid Body Solver from the Dynamics menu set) adjusts how often Maya "looks" at the state of all its rigid bodies. The smaller this step size, the more times per second Maya has to run rigid body calculations, resulting in more accurate collision detection at the cost of slower simulation speeds. The default step size of 0.03 seconds is slightly shorter than the length of a frame (about 0.04 seconds), so by default the solver samples a little more than once per frame. Reducing the step size to 0.001 (the smallest value it can be) forces Maya to check the state of all rigid bodies about 42 times per frame, which results in much slower playback.
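If you want to change the step size from a script instead of the settings window, a one-line sketch (assuming the default solver node is named rigidSolver) is:

    // Smaller steps mean more accurate collisions but slower simulations.
    setAttr rigidSolver.stepSize 0.005;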
After many more hours of tweaking numbers and watching slow playbacks, we arrive at a second take for the break, a still of which is shown in Figure 7.11 (the full animation, breakTake2.mov, is on the CD). We now have a fairly convincing pool ball break, but it's far from controllable.
Figure 7.11: A second take at the break simulation (see the full animation on the CD)
Figure 7.12: A revised version of the rigid body pool "table"
If all we needed was a convincing break and we didn't care about where the balls ended up, we could probably stop here. Because we need more control over the simulation, however, we need to keep plugging away. After some more experimentation (if you call five hours of adjusting numbers and swearing at the terrible and slow simulations "experimentation"!), we finally have a brainstorm and decide to think inside the box. First, we note that the interpenetration errors do not occur between balls, so we realize that by replacing our bumper planes with simple polygonal cubes, which have depth as well as height, we might get rid of those nasty errors. At the same time, we realize that by placing a frictionless, bounceless rigid body cube above the pool balls (not touching them, of course, but just slightly above them), we could constrain the balls to stay on the surface of the table, getting rid of the biggest problem we have: the jumping pool balls. Figure 7.12 shows our revised rigid body table, with the top cube visible, though we normally set it to invisible so we can see the actual pool balls. This new version of the rigid body table proves to be much more robust, allowing us to experiment freely with minor adjustments to the initial settings of the cue ball without the simulation breaking down or balls jumping off the table. Just as important, we can now set our step size higher again (around 0.01 to 0.005), which makes simulations play back much faster and thus allows for quicker experimentation. Now that we have things running fairly smoothly and can experiment more freely with the settings, we can move into the production phase of the process and try to get all the balls to end up where they should be after the break is finished.
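A rough MEL sketch of that ceiling cube follows; the dimensions, height, and names are invented for illustration and would have to match the real table:

    // Build a thin cube and park it just above the balls.
    polyCube -width 60 -height 0.5 -depth 30 -name ballCeiling;
    setAttr ballCeiling.translateY 1.3;
    // Make it a frictionless, bounceless passive rigid body, then hide it.
    select -r ballCeiling;
    rigidBody -passive -bounciness 0 -staticFriction 0 -dynamicFriction 0;
    setAttr ballCeiling.visibility 0;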
Production: Simulating, Baking, and Adjusting to Create the Break Pattern

Here's where we earn the big bucks, so to speak: it's time to create the actual break that will be used in the animation. Because we have already used all the objects needed for the simulation during the R&D phase, we only need to get the initial settings right to create the proper pattern at the end. Doing so, of course, is no easy feat, but now that our simulation is mostly well behaved, we have the luxury of experimenting quickly with various settings and homing in on our target pattern. Our goal here is to create exactly the pattern indicated in Figure 7.2, but we know that we can be off somewhat (as long as the balls end up close enough that they require the same shot) and also that we can move one or two balls if needed during the cutaway shots that immediately follow the break.
At this point, there's nothing left but to sit and play around with numbers for as long as it takes to get the simulation to produce the right pattern. Although this isn't a very exciting or sexy part of the job, it's the necessary final step to creating the simulation called for in the script and boards and thus has to be done, however long it takes. At the same time, we constantly need to ask, Is this good enough? In other words, if we get close enough to the requested pattern, we may not need to spend the extra time and energy to get it perfect (and perhaps the perfect pattern may not be possible, who knows). This is a production, after all, so if it's convincing to the audience, it serves its purpose, and more time spent on it is wasted.

Figure 7.13: A third take at the break. Notice that the spread here is much closer to the diagram laid out in Figure 7.2. (See the full animation on the CD.)

After a good deal more experimentation with initial velocity and rotation settings for the cue ball (friction and bounciness settings seem to be close enough at this point), we come up with something that is close to production-ready, a frame of which is shown in Figure 7.13 (the animation, breakTake3.mov, is on the CD). The layout of balls in this setup is not quite correct, mostly because there is a crowd of balls surrounding the 8 ball, which will make the later shots difficult to stage; however, this setup is close enough that we can try baking the animation and see if we can manually force some of the balls to move a bit without being noticed. A Maya binary file (rigidTableFinal.mb) of this simulation is on the CD-ROM.

There is an additional error in this version of the break: the cue ball ricochets off the side bumper and travels backward at high speed due to its backspin. This change of direction would be appropriate if the bumpers had a rubber surface, but not likely when they are covered in felt. On doing some audience research, however, we determined that one has to play back the animation at about two frames per second before anyone is even aware of the problem. Even then it doesn't seem to bother anyone because of the chaos of balls bouncing around, so we decided to let this stay in the animation.
The question is, is this close enough for the shot's needs? To determine this, we first bake the simulation (choose Edit → Keys → Bake Simulation) and then play with the resultant keyframed animation to see if we can cheat balls into place without the shot looking unnatural.
Once the baking is finished, we're left with a whole mess of keyframes, as Figure 7.14 shows, for the translation and rotation channels of each of the 16 balls. If we wish, we can eliminate any final small-scale bouncing the balls do by clearing all keyframes on the translateY channels, but we actually like the smallish bounces the balls make, so we leave them. Some balls are still rotating slightly at the end of the simulation, so we keyframe a gradual slowing to no rotation for them.

To speed up playback after the animation is baked, you can remove all rigid bodies from their respective balls, thus eliminating any leftover calculations Maya needs to perform (choose Edit → Delete by Type → Rigid Bodies).
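If you would rather script those optional cleanup steps, they might look roughly like this in MEL (the ball and rigid body names are hypothetical):

    // Flatten one ball's residual bounce by clearing its translateY keys (optional).
    cutKey -attribute translateY -clear ball5;
    // Delete a ball's leftover rigid body node so Maya skips dynamics for it.
    delete ball5_rigidBody;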
The basic strategy for subtly adjusting the simulation to get balls into just the right place is as follows.

1. Set up your windows so that the top view is in the top pane and the Graph Editor is on the bottom (see Figure 7.15).

2. Select a ball that's in the wrong position, select its translateZ or translateX attribute in the Graph Editor, and then look to see where the last "bump" in the curve lies; this is the point of the last collision of the ball with another object. (See Figure 7.15.)
Figure 7.14: The Graph Editor showing the mess of keyframes left on the cue ball after baking the simulation
Figure 7.15: Top and Graph Editor panes, showing a ball's translateZ attribute selected
3. Select all the keyframes after this point, and then place the play head on the last frame of the timeline (so that the top view shows where all the balls end up). (See Figure 7.16.)

4. Select the Move tool, and then, in the Graph Editor pane, drag up or down until the ball ends up in the right position for the final frame (this ought to be close to its former position, or the discontinuity will be noticeable in the animation). You will now have a discontinuity between the selected and unselected frames, as shown in Figure 7.17. (A MEL sketch of this nudge appears at the end of this procedure.)

5. Select a range of frames just after this discontinuity, and delete them, creating a smoother curve between the two sets of keyframes. You need to delete enough frames to create a smooth curve without noticeable jumps in animation, but not so many that the ball stops unnaturally at the end of the animation when everything's moving slowly. (See Figure 7.18.)

6. Test your new animation by scrubbing in the timeline and playblasting the animation. Pay special attention to whether the ball obviously changes direction or speeds up or slows down unnaturally, and go back and correct if necessary. Also pay attention to whether this new animation path causes the ball to pass through another ball!

7. If you decide the ball stops too abruptly as it picks up its simulation keyframes again, add a keyframe or two to help smooth the transition, as shown in Figure 7.19.

That's about it for doing the correction, though it becomes something of an art to make these adjustments subtle enough not to be noticed. If any balls are too far away to be moved during the actual break, we can try to move them during the cutaway to the robots just after the break. People have an amazingly limited awareness of continuity for chaotic arrangements of objects, so we should be able to get away with anything we need to at that stage. (We'll have to do this with the one ball, as it just doesn't move fast enough for us to move it the great distance it needs to go to get away from the eight ball.)

After all the adjustments, we finally arrive at a usable "simulation," shown in Figure 7.20, that we can integrate into our animation (see breakTake4.mov on the CD for the animation, and watch how the balls have been moved versus breakTake3.mov). Once we complete this stage and are satisfied with the results, we are ready to integrate the simulation into the rest of the animation and to texture and light the shot for final rendering.
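As a concrete illustration of steps 3 through 5, the same nudge can also be done from MEL; the object name, frame numbers, and offset below are made up:

    // Shift every key after the last collision (here, frame 130) up by 0.4 units in Z.
    keyframe -edit -time "130:300" -relative -valueChange 0.4 ball3.translateZ;
    // Delete a short run of keys just after the discontinuity to smooth the curve.
    cutKey -time "131:140" -clear ball3.translateZ;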
Figure 7.16: Later keyframes selected in the Graph Editor
Figure 7.17: Detail showing a discontinuity between selected and unselected frames
Figure 7.18: The animation curve in the Graph Editor after deleting several keyframes
Figure 7.19: Detail showing an added keyframe to help smooth an otherwise too abrupt velocity change
Figure 7.20: Final positions of the pool balls after being manually adjusted
Integration: Getting the Animation Ready for Render

Because this portion of the shot doesn't have much to do with simulation work, we'll be brief here. Essentially this is the point at which all the other work gets done. We need to place final textures on the balls (now that we know where each one needs to end up!), light the scene correctly, and move the break set of keyframes into the proper position so that when the one robot thrusts the pool cue forward and "strikes" the ball, the baked simulation begins. To get the appropriate speed and angle for the initial cue ball motion, we need to adjust the speed of the pool cue to match, which takes a bit of work but ends up not being too much trouble (if it were, a clever cut at the moment of impact with the ball could cover this!). Once all the integration is finished, the shot gets lit, any necessary render passes are rendered, and the whole thing gets put together in a compositing package, resulting in the more-or-less completed shot shown in Figure 7.21 (the complete animation sequence, breakFinal.mov, is on the CD).
Working Example 2: Shattering Glass Over a Live-Action Plate

This example is from a live-action/animation short that involves an escaping carnivorous dinosaur. The dinosaur runs through a set of plate glass windows in the front of a large building. Budgetary constraints (and the fact that we weren't allowed to destroy the facade of a newly constructed building!) dictated the use of computer animation to achieve the desired effect.
Figure 7.21: The final version of the break, including lighting and texturing (see the full animation on the CD)
Figure 7.22 is a snapshot of the front of the building, which includes the elements that need to be modeled. The problem of convincingly shattering a large amount of glass is a bit different from a game of pool. Fortunately, we do not need to worry about the final position of any particular shard of glass. Unfortunately, there are a large number of shards, with unique shapes and sizes, that will behave differently during collisions, which presents a particular challenge for the computer's processor. The larger pieces of glass will also need to shatter upon their impact with other objects. This secondary shatter is complicated by the fact that we are using dynamics.

Preproduction: Planning the Shot

As always, it is important to think about the reasons for a scene before diving in to work.

Why is this scene necessary? This scene is necessary to heighten the tension developed by the escape of a carnivorous dinosaur. It is being done with computer animation because, in reality, it is impossible and because models are both too difficult and expensive to achieve within the time and budget allotted.

What style of shot is this? This must be a very photo-realistic shot, because computer-generated elements will be composited onto a background taken from a live-action filming session. It is broken down into two perspectives: a near side view and a long front view. By breaking the sequence up like this, we do not give the viewer time to rest their eyes on any one detail. Avoiding long scrutiny of the shot by the audience will assist the illusion of reality.
Figure 7.22: Photo of the building facade

Is there room to cheat the shot? Yes and no. The shot can be cheated at the cut from side view to front view, but any cheating will be highly time intensive due to the large number of objects that may need to be modified. It is best to get this one just about right from the outset.

Why use dynamics? Because it is the only animation method that fits production time and provides the level of realism desired. When dealing with a single irregular object or a smaller number of regular objects, keyframing is probably faster and more accurate than dynamics. This project deals with hundreds of irregular objects, which react not only to the ground, but also to each other.

These questions show that we have little wriggle room in this animation. The resulting shatter must be as real as we can possibly make it. The complexity of the situation prevents us from easily fudging the position of any objects. The use of dynamics techniques removes most primary controls from our toolkit. We are left using dynamics elements, some restricted keyframing, and expressions to control this chaotic mess of a scene.

On our side, we have the audience's propensity to overlook minor incongruities. The speed and complexity of this portion of the animation (about two seconds) are also great advantages. Because the animation goes by so quickly and because there are so many objects, it is extremely unlikely that anyone will notice two pieces of glass intersecting for one frame. The audience's distance from any one element of the animation also helps us avoid scrutiny. For inspiration during the creation of this shot, we are forced by our nonexistent budget to go watch a bunch of action flicks with explosions and shattering. Poor us.
Research and Development

Because the primary challenge in this case comes from the complexity of the simulation, research is done using only a single pane of glass and a simplified scene. This speeds up interactivity of the simulation greatly and avoids the temptation to do unnecessary work we would only throw away later. Of course, before a pane of glass can be broken, it must exist. The first step in any animation is modeling. Careful measurements of the architectural elements to be modeled are taken in conjunction with the camera position during a live shoot to construct a model of the building facade. The model is built using a digital picture of the building as a back plane for easy reference. From prior experience, it is clear that this scene will be difficult to interact with once dynamics are added. For this reason, not to mention being a generally good idea, the model is kept as light on geometry as possible. Unfortunately, Maya's dynamics solver does not support subdivision surfaces, so all model construction must be done using NURBS and polygons. Figure 7.23 shows the completed model of the building facade.

Figure 7.23: Completed model of the facade

We considered several methods for creating shattered glass. The two with the most potential are a physically correct simulation published at SIGGRAPH a few years back (www.gvu.gatech.edu/animation/Papers/obrien:1999:GMA.pdf) and the far less accurate shatter effects built into Maya. Due to time restrictions, we trash the physically correct shatter simulation and start working to make Maya's shatter effects do our bidding. This works well for the primary shatter, but presents some difficulties when work begins on the secondary shatter.

The most intensive research for this project occurs during our attempts to form a convincing secondary shatter. As is often the case, we discover many obstacles during this research. The majority of these obstacles are preventable with a bit of planning. One of the most common obstacles is the involuntary movement of objects as they are turned into active or passive rigid bodies. Freezing the transformations on any object before changing it into a rigid body can easily prevent this.

You can easily freeze the transformations by selecting an object and choosing Modify → Freeze Transformations. Beware! This will permanently change all of the translation, rotation, and scale settings to their default (zero or one) while leaving the object in place.
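The same operation can be scripted; a minimal MEL sketch (the object name is hypothetical) is:

    // Equivalent to Modify → Freeze Transformations for one pane.
    select -r glassPane1;
    makeIdentity -apply true -translate 1 -rotate 1 -scale 1;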
Another common problem is reversed normals. When using dynamics, it is important to be sure that the normals on any rigid body are facing the direction of impact. If they are not, interpenetration can result. Interpenetrations are bad for a couple of reasons. First, they allow objects to slide through one another, which is not realistic. Second, they slow the computer down, because it has to backtrack in the simulation data and attempt to fix the problem whenever it comes across one. The majority of interpenetrations can be avoided by doing two simple things: First, make certain that no rigid bodies begin the simulation in contact. (This includes passive rigid bodies.) Second, try to avoid high-velocity impacts, since the computer has a hard time untangling two objects that become thoroughly embedded in one another over the course of a single frame.
You may find that creating a second copy of passive rigid body collision objects and reversing the normals is an effective way to prevent interpenetrations. This is especially true when using planar rigid bodies. Unfortunately, this does produce an extra interpenetration error at every frame, because it creates two coexistent rigid bodies.
The simplest, yet most aggravating, problem that we come across is the interactivity of the simulation, or lack thereof. Without memory caching enabled, stepping from frame to frame to examine the results of the simulation takes an extraordinary amount of time. With memory caching enabled, the cache must be deleted for every object each time a change is made. Additionally, as the scene gets more complex, we find that the cache seems a bit less reliable. One major source of interactivity problems is evident when the animator steps backward in time. Unless we return directly to the beginning of the simulation, pieces often end up in the wrong places when going backward in time. We believe that this stems from the method used to delay the glass shatters: turning the activity of the rigid bodies on upon reaching the frame at which the glass needs to break.

We use a few tricks to let us move around in our simulation. The Graph Editor is one of these. Since we are keying the activity of our active rigid bodies anyway, we can move the activity-inducing keys beyond the frame where changes are needed. Doing this bypasses the dynamics calculations and lets us move around in our own animation without using a supercomputer. Of course, this technique is quite useless if the changes depend on objects being in some state other than their initial state.

Memory caching is a feature that records the results of a dynamics simulation, allowing the computer to bypass the calculation step in the future. If the cache is not current (which happens whenever changes are made to any rigid body), it must be deleted and recalculated. The computer automatically fills an empty cache, but it must be told to delete the old cache manually. The Enable, Disable, and Delete commands for memory caching are available in the Dynamics menu set by choosing Solvers → Memory Caching → Enable, Disable, or Delete.

Another problem that comes up often in dynamics is nonrepeatability. We believe that this trouble results largely from floating-point errors, which get worse each time they are sent through a dynamics equation. Think of that old telephone game you may have played when you were a kid: a word is whispered to a person at one end and, through a series of minor changes and misunderstandings, becomes hopelessly garbled by the time it gets to the end of the line. This can become a rude problem for animations like the previous pool break. For this animation, nonrepeatability only bothers us by making secondary shatters harder to time. We cannot keyframe the activity of secondary shatter objects, because the frame number may change every time we run the animation.

One last problem is that rigid bodies which are later deformed do not act as expected. The structure holding up the glass panes in this scene is bent using lattice deformers to allow the dinosaur to break through. Unfortunately, Maya's dynamics solver treats deformed objects as if they are never deformed for purposes of its collision calculations. Although this is an area of initial concern, a bit of research shows it is not noticeable, since the glass explodes away from the structure. Figures 7.24 and 7.25 show the effects of deformation on the structure. A playblast of the structure's deformation is also available on the CD as structureDeform.mov.
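To make the delayed-activation trick described above concrete, keys on a rigid body's active attribute might be set from MEL roughly as follows; the node name and frame numbers are hypothetical:

    // Hold the pane still until frame 95, then let it start simulating at frame 96.
    setKeyframe -attribute active -time 95 -value 0 pane12_rigidBody;
    setKeyframe -attribute active -time 96 -value 1 pane12_rigidBody;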
Figure 7.24: The structure before deformation
Figure 7.25: The structure after deformation
We now need to consider several methods for controlling the primary shatter. The most obvious method is to use force fields, but other methods include invisible keyframed passive rigid bodies, initial conditions, and impulses. The force fields prove to be finicky and take a lot of adjustments to the attenuation and magnitude attributes to achieve the results desired. Impulses and initial conditions do a good job of getting that first explosive movement imparted to the glass. We make use of all of these methods to ensure that the glass shards move as desired.

When working with force fields, always use the Dynamic Relationships Editor to make sure that the fields are linked to the rigid bodies you want them to affect. A good shortcut is to select the bodies you want to affect when making a field. This turns on the links between those bodies and the field by default.
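For example, fields created while the target shards are selected are linked to them on creation; a rough MEL sketch with hypothetical names and values looks like this:

    // Select the shards first so the new fields connect to them automatically.
    select -r shard1 shard2 shard3;
    radial -magnitude 25 -attenuation 0.5 -position 0 5 0;
    // 386 suits a scene modeled in inches (see the gravity note later in this example).
    gravity -magnitude 386 -directionX 0 -directionY -1 -directionZ 0;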
The more difficult effect to achieve is the secondary shatter. The issue here is that glass shards will break again when they hit the ground—this is what we are calling the secondary shatter. The primary shatter is, of course, the first time the glass breaks and falls out of its supporting structure. This is problematic due both to timing and positioning. The secondary shards have to appear in the same position and orientation as the parent shards when they are made visible (and the parent panes made invisible). We try parenting, keyframing, using the cache, expressions, and constraints. Parenting doesn't turn out to be useful, because there is no easy way to unparent on the fly without compromising the placement of secondary shards. Keyframing is not practical due to the sheer number of times the cache will have to be deleted. Expressions work, but take an annoyingly long time to add. In the final result, the secondary shatters are difficult to notice and play a more subconscious role than intended. Therefore, we suggest sticking with the simpler primary shatter when doing work that will be viewed from a distance.
Figure 7.26: Single shard shot just before secondary shatter
Figure 7.27: Single shard shot shortly after secondary shatter
The method we eventually come up with is rather complex. By using two sets of secondary shards, we are able to treat one set as a single active rigid body so that it looks like a whole piece of glass. The other secondary shards are all constrained by point and orientation to the first set of shards. After adding the constraints (and the order this is done in is important), these shards are also made into passive rigid bodies. Of course, if we just leave them as is, we will have an awful mess of interpenetration warnings. To avoid this, we set the secondary shards to a different dynamics layer by changing the collision layer attribute in the Channel Box. All rigid bodies with matching collision layer numbers (integers) will collide, but rigid bodies with different collision layer numbers will not interact. For added functionality, there is collision layer -1, which is a global collision layer. If we set the secondary shards to a new collision layer, they will never collide with the primary shards, so we need to keep that in mind when reviewing our work for errors later.

The last two things we have to do are switching visibility between the two sets of shards and activating the second set of rigid bodies upon impact. Unfortunately, these require expressions (a rough sketch of one appears at the end of this section). We have several methods available to us for detecting collisions within secondary shatter expressions. We could turn on collision information caching within the rigidSolver node using the Attribute Editor, but that slows the simulation even further. Instead, we choose to base the activation of these expressions on simple physics, activating the secondary shatter when the shard's velocity on the y-axis becomes positive (when it bounces).

Figures 7.26 and 7.27 illustrate what is meant by a secondary shatter. Figure 7.26 is the frame just before a secondary shatter, and Figure 7.27 is a frame shortly after that secondary shatter. A playblast of the results, paneShatter.mov, is on the CD. The Maya binary file for a simplified example of a single pane shatter (simplePaneExample.mb) and a general expression, secondary.txt, are on the CD. With the research done for a single pane of glass, it is time to put all the panes together.
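The expression we actually used ships on the CD as secondary.txt; purely as an illustration of the idea, a switching expression for one parent shard could look roughly like the following. All node names are hypothetical, and it assumes the rigid body node exposes its per-axis velocity:

    // Setup, done once outside the expression: isolate the secondary shards.
    //   setAttr secondaryShard1_rigidBody.collisionLayer 2;
    // Runtime expression: swap visibility and activate the pieces on the bounce.
    if ((parentShard_rigidBody.velocityY > 0) && (parentShard.visibility == 1)) {
        parentShard.visibility = 0;            // hide the unbroken stand-in
        secondaryShardGroup.visibility = 1;    // reveal the pre-shattered copies
        secondaryShard1_rigidBody.active = 1;  // let the pieces start simulating
        secondaryShard2_rigidBody.active = 1;
    }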
Production: Easier Than Herding Cats, Just Not by Much

Our next step is to begin developing the final, composite model. At this point, we have a need to see some preliminary stuff on a large scale. Remember, just because it works in the testing phase with one pane doesn't mean it will look right when 20 panes are placed side by side.
In the interest of getting the most feedback as quickly as possible, we complete the primary shatter and initial explosion for every pane of glass first. This will give us a good idea of the overall feel of the scene and let us better assess how many secondary shatters we actually need. We begin by establishing that the structural elements near the panes of glass will need to be passive rigid bodies: it just wouldn't do to have a pane of glass fall right through a steel support beam. Of course, the ground and masking arch also need to be passive rigid bodies so that glass can collide with them.

Next, we perform crack shatter effects on each of the panes of glass (in the Dynamics menu set, choose Effects → Shatter and adjust settings for the Crack Shatter effect). During this step it is imperative to make sure that the shards produced by each crack shatter are believable. This effect can produce some crazy-looking shapes. It is also important that we avoid producing any puzzlelike shards. If two shards fit together like pieces in a jigsaw puzzle, an ugly number of interpenetration errors will occur as they begin to move against each other. With this done, each piece is given a mass roughly corresponding to its size. This is done by eyeball and guesswork, as it does not need to be precise. We then give the pieces an initial velocity to kick them out of the frames and add cone-shaped radial force fields to help make the shatters diverge some. Shards that need a stronger acceleration are also given an impulse for a few frames. For the finishing touch, we add a gravity field, which has an appropriate magnitude for the units used.

Maya's gravity fields default to 9.8, which is the correct magnitude when dealing with units in meters. This simulation was built using inches, because that was the unit the original modeling information was recorded in, so we used 386 instead (the acceleration of gravity in inches per second squared). As always, the only thing that matters is what makes the scene look right, so feel free to experiment with gravity.

Since many of the window panes are broken, not by direct contact, but by the bending of the frame around them, we need to determine a timing sequence for activating these panes. This is done by playing the animation of the structure's bending frame by frame and deciding when each pane's window frame is mangled enough that the pane would shatter. Before moving on to secondary shatters, we have a niggling bit of unreality to squelch: when glass windows break, some of the glass remains within the frame, and some pieces are merely loosened and fall a bit later than the rest of the window. For these effects, we will select some likely shards and make them passive rigid bodies. For some of these, we will add keyframes on the activity attribute to release the loose shards a bit after the rest of their windows have fallen out. This is a simple but important step. The fact that this speeds up the animation a bit is just gravy.

Performing a secondary shatter on every shard would be nice, but it's just not practical. Instead, we must determine which shards are large enough to attract the audience's eye and deal with those. To avoid later headaches, we select those pieces we want to perform secondary shatters on and color them differently. The pieces chosen are in yellow in Figure 7.28. With the pieces deserving of an extra helping of digital aggression clearly marked, it is time to begin the secondary shatters.
Figure 7.28: A shot of the model with shards selected for secondary shatter colored yellow

These shatters are performed according to the recipe we came up with during our research and development work.
Figure 7.29: A screen capture of the hypergraph before adding secondary shatters
Figure 7.30: A screen capture of the hypergraph after adding secondary shatters
The only real problem is that the hypergraph gets a bit more complex when dealing with this many shards, as illustrated in Figures 7.29 and 7.30. We find that using a separate modeling layer for the secondary shards helps us select and identify all the shards involved.
As you can see, most of the brain work was done in the research step. The production step is mostly following a recipe and checking to be sure that it looks right. Only after determining that the primary and secondary shatters are performing well should we consider baking the simulation. This example is so complex that tweaking things is going to be more efficient within the realm of the dynamics simulation than it will be once things are keyframed. After baking, we check to ensure that the scene has turned out as desired and make any necessary changes as long as they can be done quickly. If major problems crop up, it's back to dynamics with this one.
Integration: Getting the Animation Ready for Render

With most of the animation work done, we quickly add some particle effects to get those little crumbs of glass that always seem to be underfoot after breaking a window. We then turn this shot over to the surfacing and lighting groups, who must establish the proper reflection maps and handle transparency. While they are doing this, we can go back and make a simple model of the inside of the building, so that we don't have a black hole behind the model after the windows break out. We could also do this with photographs from inside the building if desired, but because our primary shot during the glass-breaking sequence is a moving one, we figured it would be easier to re-create the inside of the building than to try to fake a perspective shift with a 2D photograph of the background.
Figure 7.31: The dinosaur breaks out of the building.

After assembling the final scene, it is rendered and composited with the live-action footage. Figure 7.31 shows a still from this scene (an animated version, breakOutFinal.mov, is available on the CD).
Summary

One thing should be abundantly clear from this chapter: dynamics simulation, while most assuredly not the same as keyframed animation, is really more an art than a science. Sure, it helps a lot to have a good sense of how the physics of reality work and to like playing with numbers, but the real trick to making Maya's dynamics work for you (instead of the other way around!) is to know when and how to break the rules to get the simulation to behave robustly and eventually give you the right data to include in your larger project. If nothing else, having control over your own little universe teaches you a lot about the real one we live in, explains why activities like playing pool are so frustratingly difficult to master, and explains why you can't put a broken pane of glass back together.
Complex Particle Systems

Habib Zargarpour, Industrial Light and Magic
Just when you think you've seen it all, you come across a new project that yet again pushes the limits of what can be done. Such was the case when I had just finished working on Star Wars: Episode I The Phantom Menace with its entirely digital shots of pods racing and crashing. I felt there were no challenges left when along came The Perfect Storm with requirements to re-create hundred-foot stormy seas in hundreds of shots. In this chapter, we'll discuss how to approach, plan, and develop particle elements for large-scale projects. We'll use the mist blown off the top of the waves (the crestMist element) in The Perfect Storm as an example. (Figures 8.1 and 8.2 show some test shots for The Perfect Storm.) You will see that, much like in nature itself, developing a technique is an evolutionary process in which you try several methods and eventually only one survives. A good R&D process involves considering many techniques and eventually selecting one of them.

Preproduction: Research and Development

Preproduction is the phase of a film project that occurs before any filming actually begins. During this phase, all the preparations are made to assure that production will go as smoothly as possible. If the project involves any kind of visual effects, the preproduction phase is the time to prepare for them. These preparations can include gathering reference material or planning to shoot reference during production, creating accurate storyboards, developing 3D animatics, and involving as much R&D as can fit in the budget or timeframe to be able to create the necessary effects.
Figure 8.1: The Andrea Gail as seen in the entirely computer-graphics (CG) test shot used to determine the feasibility of creating the visual effects
In this section, I'll discuss how to best plan for a project with these preparations and why they are necessary.
Gathering Reference

In any kind of project, clarifying what you are trying to create is important. Whether you are re-creating reality and nature on a project such as The Perfect Storm or creating never-before-seen space anomalies for a Star Trek project, you need visual reference material that describes what the effect should look like. In the case of re-creating nature, the reference indicates exactly what needs to be done and will haunt you throughout the project as you discover just how difficult and complex re-creating nature really is. In the realm of science fiction, the reference helps to establish guidelines that ground and define an effect that might otherwise be elusive. The reference saves a lot of time that would otherwise be wasted in trying to define the look. In visual effects, indecisiveness can be costly, so you want material that you and the client can agree on as a direction. For science fiction projects, good artwork can also replace reference footage, especially if the client wants animation that looks like nothing anyone has seen before.

I place a lot of importance on gathering good reference material, particularly if I am replicating something in nature. The pursuit of creating a natural phenomenon is a never-ending process: you will always want to improve it, and future projects will inevitably need it. For me, gathering reference does not end in preproduction but is an ongoing process. If you have a large crew, organizing the reference and making it accessible online to the entire crew is always essential.
Figure 8.2: An early phase of the CG ocean and environment used in the CG test shot in Figure 8.1
Figure 8.3: Live-action footage from The Perfect Storm, which I used as reference to re-create crestMist. This shot is one of only two in the film that have real stormy seas. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.
In crestMist, droplets of water are separated from the surface of the water, blown off the top of the waves, and carried away with the wind. What makes this element challenging is how it transitions from dense water into light spray and vaporizes.
If you haven't already, get The Perfect Storm on DVD. It was created from a digitally timed version of the film and is a great way to see the examples in this chapter in motion.
Early in the project's R&D phase, the crew took a field trip on a 50-foot fishing boat out beyond the Golden Gate Bridge to experience firsthand what they were trying to re-create, minus the 80-foot waves and hurricane winds! The 6-foot seas actually did get quite rough (see Figure 8.4). While that doesn't sound like much, it was enough to make half the crew seasick. In the image, you can see the splash resulting from the bow of the ship slamming against a wave. The splash consists of chunks of water that spread out and transform into smaller drops of water. The wind will cause the splash droplets to become even finer, turning them into mist.

At one point in the research, I went up in a chopper to get images of breaking waves directly from above. Figure 8.5 shows a still from the chopper, which at one point got within 3 feet of the water surface. A word to the wise: never challenge a helicopter pilot on how low they can fly!

In this chapter, when we refer to wave height in feet, we are measuring from the trough (the lowest point) to the crest (the highest point) of the wave.
The Planning Stage

So you've been asked to do the impossible. Now what will it take to do it and how long will you need? That may also be impossible to predict, but you have to at least try. In this section, I'll discuss what to consider when estimating duration and crew for a project.

Interpreting Storyboards

Other than the script, you will usually also need storyboards in the bidding and preproduction phase to assess exactly what needs to happen in the shot or sequence. You need to know how to interpret the storyboards, what to look for that can help you predict problems, and how to use storyboards to solve these problems. You can use storyboards to answer the following questions:

• Is the camera going to be moving?
• How large is the effect going to be onscreen?
• Will other objects or actors be in front of the effect?
Figure 8.4: Image from a fishing boat about the size of the Andrea Gail
• How close are we going to be to the effect and how much detail will be visible?
• How many more shots are there like this one?
• Will this shot be cut in between other shots with practical effects?
• How many 2D and 3D shots are there going to be in total?
• Is the effect going to be coming at the camera? This is the most important question if the effect involves particles.
You should be able to answer most of these questions from the storyboards, with the exception of the camera motion if it is not indicated. A particle effect that needs to come toward the camera is an immediate red flag in terms of resources, R&D time, and rendering effort. In the example storyboard in Figure 8.6, you can see the letters "VFX," indicating that this shot will need some visual effects and cannot be done entirely as a practical or live-action shot. You can see that actors will be close to the camera in rough water. The arrows indicate that the camera will tilt down to follow the jump of the rescue diver.

If you are lucky, you will have access to 3D animatics from a process called "previs," which stands for pre-visualization. Previs animatics, which are becoming quite popular, usually consist of rough 3D animations that are like moving storyboards. They indicate the action that takes place, the composition, and what the camera sees. Good animatics also include accurate lens measurements, as seen at the bottom of Figure 8.7.

For The Perfect Storm, we created the animatics ourselves as part of the preproduction phase, and we took advantage of the opportunity to make them as accurate and realistic as we could. We used physically correct boat dynamics and the actual ocean simulation data that we were going to use in production. Thus, creating the animatics became the way we animated the shots for the real show, and then we refined the movements from the simulations as necessary.
Figure 8.5: A photograph taken from a chopper directly over some breaking waves. Open-sea breaking waves are different from shore-breaking waves, but the foam left behind is similar.
Our animators could "steer" the boats by animating the rudder and control the throttle to get realistic results. It might seem simple to animate a boat over waves, but as we learned from trying to refine the simulations, it's an exact, unforgiving science. Our animatics were a good indication of what the camera would be doing in a shot and also gave the stage crew a good reference for the continuity of the action when they were shooting the boats on stage.

So now you have seen the storyboard and the animatic. Figure 8.8 shows a frame from the final result, in which all the planning paid off. It answers some of the questions posed earlier:

• The camera will be moving.
• The effect will cover 75 percent of the screen.
• Actors will be in front of the effect with medium detail visible.
• The particle effects will not be coming at the camera.
Figure 8.7: A 3D animatic for the storyboard in Figure 8.6
Figure 8.6: A storyboard (by storyboard artist Phil Keller) from the chopper rescue sequence of The Perfect Storm
Figure 8.8: The final composite of the shot in Figure 8.6, shown here with the full height of the action as the camera pans up and back down. You can see a breakdown of this shot by viewing mrl35104_breakdown2.mov on the CD-ROM. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.

To better understand how some of the elements connect, take a look at Figures 8.9, 8.10, and 8.11. The blue-screen image in Figure 8.9 is used in the foreground for the water and the crew in it, but all the water and activity from a few feet in front of them to the horizon was computer generated to match. The biggest challenge here was to find a piece of wave from more than 50 simulated oceans that would match the live-action movement of the water filmed in the tank. This was done by our animators, who meticulously tried matching section after section of ocean until one of them finally worked. In the future, one can imagine simulation software that would automatically model the foreground ocean and extrapolate the rest of the surrounding ocean. The CG ocean in Figure 8.11 is composited with the CG mist in Figure 8.10, along with a dozen other CG elements not shown here, to give the final composite in Figure 8.8.

The question of whether a shot will be cut in between other live-action shots with practical effects is important because then you know your effects need to match the surrounding footage. This usually means a lot more hard work because the results have to be seamless; matching anything is difficult, let alone matching reality.
Figure 8.9: This blue-screen element of the Mistral crew, shot in the 100-by-95-foot indoor tank, the largest of its kind at the time, and some light foreground mist are the only practical (real) elements in the final composite shown in Figure 8.8.

Sometimes the research for one effect can be applied to another. In developing crestMist, we realized that many other types of mist could benefit from some of the work, such as the following:
• crestWisp: mist blown off small waves of about 3 feet or smaller
• mist: atmospheric mist that included background, mid-ground, and foreground mist
• troughMist: mist that traveled near the ocean surface like a blanket
• splashMist: mist coming off a splash from the boat pounding the waves
• chopperMist: mist resulting from the blast of air generated from the rescue helicopter's blades

By now you have noticed all the unique names for various elements. I'm a stickler for naming conventions, and since we were going to create many elements, we might as well name each one in a way that we would know which part of a shot we were talking about when communicating in dailies, shot reviews, and e-mail. The names should be simple and short and convey clearly what the element is.
Resources and Time

At this stage, you probably don't have a clear idea of the final budget because you are creating a test that will determine whether you get the project. You also don't know if what you have been asked to do is even possible or whether you need to use an as-yet-to-be-invented technique that has completely unknown repercussions.
Figure 8.10: The CG crestMist generated for this shot had to be combined with the mist generated from the CG helicopter, seen here together.
Figure 8.11: The CG ocean generated for this shot had to match and connect with the waves in the live-action footage in Figure 8.9, including the motion blur.
It is still important to outline the possibilities in terms of technique and estimate how many resources and how much time each will require. You can always approach a problem in more than one way, but one technique will always be better than the rest. You need to determine how many people it will take to complete the R&D for each task. You then use this estimate to determine the budget for the R&D phase along with the resources and time required. You're in luck if the project is similar to work done in the past, but frequently this is not the case, because directors want more and more to include never-before-seen effects in their films. You might need to create entirely new simulation engines and, in some cases, custom renderers, which is why it's important to communicate with your software group at the planning stage (or create a software group if you don't have one). If the software group is aware of your challenges and issues from the start, they may be able to anticipate some of your needs.

The number of shots in which a particular effect will be used determines how streamlined the process needs to be. If the effect will only be used in one or two key shots, spending resources to make it easy to use or understand doesn't make sense. On the other hand, if the effect will be used in hundreds of shots, it is clearly worthwhile to spend resources on streamlining, documentation, and testing in preproduction.
Choosing the Best Method: Which Type of Simulation to Use

Before looking at doing fluid dynamics or particle simulation of any kind, you need to broaden the scope of possibilities to include any and all methods to get the job done. Usually the process of elimination will take care of this, but the best place to start is to make a list of any technique that might work. In this section, we'll look at all the viable possibilities for generating crestMist, and I'll discuss why some techniques might be better than others.
Can It Be Done Using Practical Techniques?

One reason for my attraction to Industrial Light and Magic, and clearly the advantage of working in a multidisciplinary effects house, is the opportunity to combine live-action practical elements with computer-generated elements. The first question always asked when tackling a project is whether you can achieve the effect or produce a particular element using practical photographic techniques. For The Perfect Storm, we shot many practical splashes of water for reference and also for use as elements in shots. When it came time to use such elements, we discovered how angle dependent each situation was, which made it difficult to use two-dimensional elements within a three-dimensional shot. The motion of the camera and the waves made it problematic to tie in these elements, and so few of them were eventually used.

One type of practical element that was successful was a general foreground mist that was used in many shots to represent the airborne particles of water passing within a few feet of the camera. Figure 8.12 shows a frame of practical water spray blown by wind, filmed against a blue screen and used in many shots as a subtle foreground layer of mist. This is a good example of simply using what works best. If the layer were computer generated, it would have been a costly simulation and rendering that might not have been visible anyway in the final composite. For most other elements, we found that we had to resort to completely 3D computer-generated techniques.

While working on the test shot that would ultimately determine whether we could do the project, the crestMist element didn't even register on the radar as significant or difficult. We had created a subtle version of this element using some simple techniques, only to later discover that not only is this element key to establishing the illusion of a "screaming" storm, as the director Wolfgang Petersen would call it, but it also became one of the most complex elements to create. As it turned out, "It's all about the mist!" When we got reference footage from an actual shoot on the open seas in high winds (Figure 8.3), we realized the staggering density and complexity of the element: water getting picked up from the crest of a breaking wave by the wind and blown backward, with the denser parts coming back down faster and the rest becoming mist and vaporizing!

Figure 8.12: A layer of practical mist shot against a blue screen

If the effect you need appears only in one shot, you might consider a practical element, because the development costs may far outweigh the one time the effect will be used. The more shots and camera angles in which you need the effect, the more a CG element will pay off. Of course, if what you need cannot be done using practical means, that makes the decision easier.
Figure 8.13: An all-CG shot from The Perfect Storm. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.

If the director says, "I want it to look like steam coming out from a nozzle and dissipating naturally," your best bet is to shoot a practical element. On the other hand, if the director says, "I want it to look like nothing we've seen before, and it needs to wrap around the actress's head and then vanish to the left," clearly CG is your best bet. If you need realistic gas interacting with a computer-generated character or object, in most cases you're better off attempting to create the element in CG.

The Perfect Storm trivia: Of the more than 400 visual effects shots in the movie, roughly 200 involved 3D work such as adding the ocean, and of those, 96 shots were entirely CG, including the boat, ocean, splashes, mist, rigging, sky, lightning, drips, and crew (see, for example, Figure 8.13). Many of these shots show the boat almost filling the screen and are cut back to back between live-action shots filmed on set in the 100-by-95-foot tank on Stage 16 at the Warner Bros. Studios in Burbank, California.

For many of the shots in The Perfect Storm, the nature of the action demanded detailed integration of the motion of the waves with the movement of the boat and the water splashes. The most logical approach for these shots was to make everything digitally, since the ocean, splashes, sky, and atmospheric haze were already digital. Figure 8.13 shows an example of such a shot in which the boat and crew are also digitally generated. To see a moving version of Figure 8.13, visit www.howstuffworks.com/perfect-storm.htm.
Defining the Problem: What's My Motivation?

The reference material is a good start for defining the problem, but it helps everyone involved in the project if you state the problem using specifics instead of just saying "make it look like that." Using specific terms to define the problem helps you think differently when working on a solution and helps define the scope of what needs to be done.

In the crestMist example, the problem definition is to create a technique to generate images of wind-blown water being picked up off the crest of selected waves in a scene and transforming into fine water vapor mist that eventually vanishes into the air. Figure 8.14 shows this transformation from crestFoam into crestMist that vanishes, from a profile view. The mist needs to be relatively opaque at the base and translucent once airborne. The user should be able to control which waves give rise to crestMist as well as every aspect of the mist's quantity and motion, such as the acceleration, upward rise, backswept speed, dissipation rate, and overall density. The mist needs to work when viewed from any angle on the ocean surface. Because of the extreme height of the waves, in many cases the mist is seen from below and needs to appear across the entire visible ocean. Depending on the field of view of a particular shot, you could need to cover many square miles of stormy ocean.
Laying Out the Options Even though you have an idea about the best approach, you need to separate yourself from any favorites and consider many techniques. Using this approach, you can proceed with more confidence into the final phase of selecting the best method because you won't have the fear of "what if there was a better way and we missed it?" The wider the scope of initial possibilities, the more sure you will be with your final choice. After laying out many viable ideas, I like to consult with the other members of my R&D team to make sure I didn't miss any other good approaches. It is also good to meet with the R&D team and the in-house software development group (if you have one) to draw all the possibilities on a whiteboard and list the pros and cons of each in columns—something about
Figure 8.14: The example crestMist reference with the various parts named. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.
doing it that way helps with the process of elimination. If you are lucky, some of the larger decisions will have already been made by your visual effects supervisor, such as whether the show will be primarily done with miniatures, full-scale practical effects, or CG. If you are the visual effects supervisor, good luck. For The Perfect Storm, the decision was based on the minimum scale of water in miniatures and the size of the waves that needed to be re-created. Clearly, using a real stormy sea with 150-foot waves was out of the question since you would have to wait another century and you would likely not survive it, let alone film it. In fact, we had a difficult time finding good reference footage because, for some strange reason, when people's lives are in danger, filming is the last thing on their mind! What were they thinking? In the movie, two shots were filmed on a real stormy sea (about 10-foot waves). All the rest were either shot on stage in a tank with CG ocean extension or were entirely CG shots. Water, like fire, is an element with which people are familiar, and you can't reduce the scale of a miniature past a certain point without the drops of water looking disproportionately large. I recall a classic film from the late '70s or early '80s that used miniature water effects; the droplets were about the size of people's heads. The smallest scale you would want to use with water involving splashes would be one-fourth or one-fifth scale. At one-fourth scale, a 100-foot wave is still 25 feet high—not much of a miniature. In The Perfect Storm, we also needed to cover many square miles of ocean, which would be a vast area even at one-fifth scale. So the logistics of composing and creating this deadly and hostile environment and carefully designing each shot pointed to primarily using computer-generated oceans and waves. In the next section, I'll discuss the details of how we created stormy seas on the computer. For completeness, here are the options we considered when deciding how to create the crestMist effect:

• Full-scale practical elements
• Miniature elements
• Computational fluid dynamics integrated within vendor software such as Maya
• Computational fluid dynamics using a standalone in-house tool
• Particle simulation in vendor software such as Maya
• Particle simulation in Maya with custom field plug-ins
Why Separate the Elements? When we started the project, we all imagined pushing a button, the computers crunching the numbers for the computational fluid dynamics, and the shot popping out! Yeah, yeah—we should know better by now, but you always have that hope at the beginning. As we worked on our test shot, it became evident that each and every element needed to be run separately for several important reasons:

• If we had to redo any one element, we wouldn't have to re-simulate/render all the other elements.
• We had control over each element independently in the compositing stage. If one element was too bright, we could adjust it without having all the rest change with it.
• It is easier to split the work between several artists if you are on a tight schedule.
• If we didn't have independent control of each element, too much would be left to chance, and we could not control or predict any part of it.
For example, the director asks you to increase the height of the waves in a particular shot. This request will clearly mean that the motion of the boat will have to be re-simulated/animated. The resultant splash from the boat impacting a wave will be bigger and cause too much mist to cover the boat. Now if the entire system is one big fluid dynamic simulation, you would have no direct control over individual splashes or events; you would control only general global parameters such as viscosity, wind, and gravity. In movies, there is no such thing as "well that's just what would really happen, so sorry we can't change it." The director has a certain vision for each shot and wants control over as many aspects as possible (additionally, most controls are used at the request of the visual effects supervisor to hone in on a shot). Directors have learned to accept the limitations of the real world when involved with practical live-action effects, but they are choosing to embrace the realm of digital effects with the promise that anything is possible and everything can be changed. It is our job, when setting up such effects, to ensure that we can adjust all the important aspects of an effect. By separating the elements logically, we help both the artist and the director to attain their goals. Figures 8.15 through 8.22 show several of the dozens of elements that composed the shot from Figure 8.23.
Figure 8.15: Eight elements of the all-CG shot from Figure 8.13. The gray shaded version of the Andrea Gail.
Figure 8.16: The Andrea Gail rendered with textures and materials
Figure 8.17: Close-up of the ship showing the digital stunt doubles and simulated buoys
Figure 8.18: The simulated ocean rendered with all the various types of foam, including the wake from the boat
Figure 8.19: Run-off pass of water from previous waves hitting the boat. You can see this element in motion by viewing pwl73020_runOff.mov on the CD-ROM.
Figure 8.20: Splashes around the boat generated by its movements against the ocean surface. You can see this element in motion by viewing pwl73020_sideSplash.mov on the CD-ROM.
Figure 8.21: The splash created by a wave crashing onto the boat
Figure 8.22: Volumetric render of the light beams from spotlights on the boat
Figure 8.23: As you saw earlier in Figure 8.13, here is the final frame of the shot with all the elements integrated. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.
Selecting the Best Option: Brainstorm Sessions I tremendously enjoy the process of gathering the best minds on a project, throwing around ideas about what could work, and listing all the pros and cons of each method. It is best to be detached from any one method and try to find the solution that will work and that will also be efficient and elegant. We have all heard that it doesn't matter how you do something in visual effects; what matters is the look in the end. Whether you use rubber bands and duct tape or the latest technological breakthrough, the results are all that matter. That said, you do want to make sure the process is as simple and easy as possible, which will make the crew and yourself happier. Fluid Dynamics versus Particle Simulation As recently as four years ago, using computational fluid dynamics in production was out of the question. The complexity of algorithms combined with the lack of machine power left the field only to scientific visualization and academic studies. A January 1997 Scientific American article titled "Tackling Turbulence with Supercomputers" estimated that it would take several thousand years to simulate the airflow over a commercial airliner for one second of flight using the world's fastest computer (1 teraflop machine) that had not been built yet. Only a few years later, we can dabble in the realm of fluid dynamics in real time using interactive mouse strokes on affordable PC platforms. Maya 4.5 has a nice new set of fluid dynamic tools. Table 8.1 shows the pros and cons of using fluid dynamics for simulating crestMist, and Table 8.2 shows the pros and cons of using particle simulation for simulating crestMist.
Table 8.1: Pros and cons of using fluid dynamics to simulate crestMist
In the case of crestMist, one of the requirements is to cover miles of ocean with waves of up to 150 feet in height. A key factor in using fluid dynamic simulations is the size of the volume it needs to cover. Within this volume, the amount of subdivision of the grid determines the level of detail you get from the simulation. For example, a volume that is 1 mile by 1 mile by 150 feet needs a grid that is 5280 by 5280 by 150, which results in severely slow simulations that take enormous amounts of memory if you want the finest level of detail to be 1 foot. In practice, because of the low visibility in stormy conditions, we rarely had to generate crestMist beyond that range. There was the possibility of using multi-resolution nested simulations, but we would need to track each and every wave that was generating mist with a high-resolution grid, which would have been very complex. The advantage of using particle simulation is that the level of detail of the motion can be as small as you want over as large an area as you want. The disadvantage over the fluid dynamic grid simulation is that you will need many particles to establish a gaseous misty look that can be done easily with volumetric rendering of grids. The other advantage of using fluid dynamics is that the ocean and air are simulated at the same time, forcing the mist and waves to move together cohesively within the same space. Using particle simulation, you would have to generate artificial forces that would allow the particles to respond to the rising and falling of the ocean surface. For The Perfect Storm, we were simulating the oceans separately over a coarser grid that was fine enough to define the broader waves but not fine enough to get detailed mist turbulence. Eventually we chose to use particle simulation as our primary technique for crestMist. Using particle simulation allowed us to get the large coverage we needed with as much detail in the motion of the particles as necessary without having to deal with exorbitantly high simulation times.
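To put that grid size in perspective, here is a back-of-the-envelope calculation (my own illustration, not a figure from the production): a 1-mile-square, 150-foot-deep volume at 1-foot resolution contains roughly 4.2 billion cells, so even a single float per cell already runs to well over a dozen gigabytes of memory:

    // rough cell count for a 5280 x 5280 x 150 grid at 1-foot resolution
    float $cells = 5280.0 * 5280.0 * 150.0;                     // about 4.18 billion cells
    float $gigabytes = ($cells * 4.0) / (1024.0 * 1024.0 * 1024.0);
    print ("cells: " + $cells + "  approx. GB for one float per cell: " + $gigabytes + "\n");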
The Importance of Real-Time Feedback in Simulations The success of any good computer-generated effect depends heavily on the number of iterations you are able to do before the shot or effect is final. The more times you can run the shot or simulation, see the results, get feedback from the director or VFX supervisor, make adjustments, and rerun the simulation or redo a render, the better the outcome. There is the rare danger of noodling a shot past its prime and realizing you should have left well enough alone, but you can always go back to what you had. Most effects houses rely on overnight rendering of entire shots that can be judged the next day, and we'll keep dreaming of the day when real-time rendering will put an end even to that. Current computers are fast enough to show modelers their work in shaded views in real time as they adjust the shapes. Depending on the complexity of the model, you can get real-time feedback or pre-record animation within minutes to see a playback in real time and judge it. Lighting tools have been improved to give the artist some idea of what they're doing in shaded mode by taking advantage of the graphics hardware, and you can see those renders within an average 5 to 10 minutes for an element. When it comes to particle simulation, however, we users are not as lucky. Running simulations can take anywhere from 5 minutes to several days, sometimes gobbling up entire 32-processor render servers and gigabytes of memory.
Visualization Techniques: Tips and Tricks Getting just what the director ordered using simulations can be difficult if not impossible at times. The particles just won't do what you want them to do. They are going to fight you every step of the way. Controlling particles is like trying to organize a group of toddlers who have had too much sugar. In this section, we'll discuss some techniques that can make the job a bit easier.
Using Color Encoding Using dynamics to create particle simulations involves manipulating many variables and understanding how to interpret them. Although you can access data in a number of ways, programmers tend to use printouts to see what's going on in a program—a solution that sometimes works with Maya too. In Maya, you can insert print statements in creation or runtime expressions. Inserting print statements works if you are interested in information about a single particle or specific particles that you can identify by their particleId attribute, but in practice, using print statements is impractical for large numbers of particles. Fortunately you can also use numeric display in Maya. To access numeric display, follow these steps:

1. Select the particle node, and open its Attribute Editor.
2. In the Render Attributes section, you will find the Particle Render Type attribute. Change points to numeric, and click Add Attributes for Current Render Type to create the corresponding attributes for this display type.
3. Enter the name of the attribute you want to see in the Attribute Name field. Maya displays that attribute in the Viewer window for every particle.
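For completeness, here is a rough MEL equivalent of those steps. The particle shape name particleShape1, and the enum value used for the Numeric render type, are assumptions based on standard Maya behavior rather than the chapter's scene files, so verify them against your own setup:

    // switch the render type to Numeric (assumed to be index 2 in the particleRenderType enum)
    setAttr particleShape1.particleRenderType 2;
    // add the display attribute that "Add Attributes for Current Render Type" would create
    if (!`attributeExists "attributeName" particleShape1`)
        addAttr -ln "attributeName" -dt "string" particleShape1;
    // show each particle's velocity as numbers in the view
    setAttr -type "string" particleShape1.attributeName "velocity";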
A drawback of using numeric display is sheer visibility when thousands of particles are involved. Seeing and making sense of the data becomes difficult, especially when you have to read one piece of data at a time and interpret the numbers. One solution is to use the Selected Only option box in the Attribute Editor so that only the data for the selected particles is displayed. Just remember to turn this feature off when you are done with it; otherwise, the next time you use numeric display, you'll wonder why no numeric data is being displayed or why the particles aren't displayed in color.
I often use color encoding to visualize the multitude of attributes and data on thousands of particles. This involves the use of a temporary runtime rule to place the information you are trying to see directly within each particle, as shown in Figure 8.24. The following is an example of such an expression:

    vector $vel = velocity;
    rgbPP = <<$vel.y/5.0, age/lifespan, radius*10.0>>;

This example uses the color red to measure the upward velocity of any particle. Because the components of vector attributes cannot be accessed directly within MEL expressions, we assign the velocity vector to a temporary vector variable called $vel. You must scale different variables in order to fit them within the 0 to 1 range of a color; in this case, dividing velocity in Y by 5 and multiplying radius by 10 normalizes each value appropriately. The parameter age divided by lifespan is always within the 0 to 1 range.
Using Spline Controllers A clean, simple interface is important for the user. If you are setting up an effect to be used by many artists, take advantage of some features in the software that can help provide clarity. For
Figure 8.24: A sample image showing the use of color in a splash simulation in The Perfect Storm
example, for The Perfect Storm we used spline controllers. Spline controllers are simply nodes that are curve primitives built into specific shapes that are instantly recognizable for what they do. If you are creating a custom plug-in such as a force field, you can also create your own custom manipulators that don't need a separate node to contain them. In Figure 8.25, you can see the use of spline controllers shaped into simple arrows to control the wind and wave direction. You can also see the connections between the spline controllers and the expression node in the Hypergraph. We used color coding to show which arrow was for which function. In this case, we used the larger purple arrow to set the wind direction, and we used the smaller green arrow to set the wave direction. The wave direction arrow was a slave to the wind direction by default, since this is generally the case in stormy sea situations. However, for creative cinematic reasons or in certain situations, you can set the two directions independently, sometimes even in opposite directions, by simply breaking the rotational constraint between them.
Streamlining for Ease of Use: A Centralized Interface Although the particle systems are complex, you want controlling them to be simple. By placing the most commonly accessed attributes in the expression node, you will have all the
Figure 8.25: Spline arrows used to control the wind and wave directions in a scene in The Perfect Storm. The use of color coding was important. Here the purple arrow controls the wind direction, and the green arrow controls the wave direction.
Figure 8.26: The centralized control node for crestMist is a spline in the shape of an E.
attributes in one place. To use an attribute, simply select the appropriate E expression node in the Perspective window without even opening the Outliner to look for it. In Figure 8.26, you can see the attributes added to the crestMist expression node in the Channel box. The windAmp attribute controls the overall wind force, and sprayUpAmount is the initial upward velocity. Units are not important here. These units are simply the numbers that gave the best visual results for the average wave height. We had to adjust the numbers for each shot, depending on the wave height and the violence level of the storm at that particular time in the film. As you develop your effect and use it in a test shot, it will become evident which of the attributes you are accessing most frequently. Make a list of these attributes and start prioritizing them so that you can select only the most important to be connected to the centralized control node. You don't want to connect too many attributes for two reasons:

• It will get too confusing.
• The evaluation order in Maya can be affected by the connections in the Hypergraph, so you might modify the original behavior of your setup if you go overboard with the connections.

Once you have made your top attribute list, organize the items into groups that make sense, and add them to the centralized control node in that order so that they appear in the Channel box with that organization. For example, if you have a global switch to turn the expression on or off, that attribute should be either at the very top or at the bottom of the
list. Vector attributes need to stay together in an X, Y, and Z order, top to bottom. As your testing continues, you can add or remove attributes as they become more or less important as part of the centralized control, and by the time you use the attribute on a real shot, it should be just what you need.
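As a small illustration of this kind of setup (the node and attribute names here are hypothetical, not the production rig), adding the prioritized attributes to a controller node with addAttr keeps them together in the Channel Box in the order you add them:

    // add the most frequently used controls first so they appear at the top
    addAttr -ln "sprayUpAmount" -at double -min 0 -dv 5 crestMistCtrl;
    setAttr -e -keyable true crestMistCtrl.sprayUpAmount;
    addAttr -ln "windAmp" -at double -min 0 -dv 1 crestMistCtrl;
    setAttr -e -keyable true crestMistCtrl.windAmp;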
Using Global Variables When you are creating a complex effect using particle expressions, MEL expression nodes, and built-in and custom fields, you need a fast, efficient way to communicate certain variables to those expressions. Using the power of global variables, you can define a global variable within a MEL expression in your main controller node and access it from within one or many particle expressions with great speed. In our crestMist example, you can easily select the main controller node: in Figure 8.26 earlier in this chapter, it's the selected node in the shape of an E. The following is an example of a MEL expression within the main controller node:

    global vector $windDirection;
    global float $windAmp;

    // find the direction of the arrow in world space
    vector $windCenterPos = `xform -q -ws -t $windCenter`;
    vector $windForwardPos = `xform -q -ws -t $windForward`;
    $windDirection = $windForwardPos - $windCenterPos;

    windDirection.windDirX = $windDirection.x;
    windDirection.windDirY = $windDirection.y;
    windDirection.windDirZ = $windDirection.z;
The following is an example of a particle runtime expression using the global variables $windDirection and $windAmp:

    velocity += $windDirection * $windAmp;
When you use global variables, you can access them anywhere in the scene, and you don't risk modifying the evaluation order in the Hypergraph because you haven't made any direct links between two nodes' attributes. When multiplying a vector, be sure that you don't multiply it by another vector unless you are really interested in the dot product of the two, because that's what you'll get. A dot product is essentially the projected length of one vector onto another.
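A quick illustration of that pitfall in MEL (my own example, not from the sample scenes):

    vector $v = <<1, 2, 3>>;
    vector $scaled = $v * 2.0;          // multiplying by a float scales the vector: <<2, 4, 6>>
    float  $dot    = $v * <<0, 1, 0>>;  // multiplying by another vector is a dot product: 2.0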
Locking and Hiding Unused Channels To clarify what the user needs to control, it is good practice to lock and hide any attributes that the user should not touch or worry about. For example, if you have a node that is only used for setting parameters for an expression and does not appear in the perspective window, lock and hide all transform attributes so as to avoid confusion about which parameters are meaningful.
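A minimal sketch of doing this with setAttr, assuming a hypothetical parameter node named crestMistCtrl:

    // lock the standard transform channels and remove them from the Channel Box
    string $node = "crestMistCtrl";
    string $channels[] = {"tx","ty","tz","rx","ry","rz","sx","sy","sz","v"};
    for ($ch in $channels)
        setAttr -lock true -keyable false -channelBox false ($node + "." + $ch);

After this runs, only the user-defined attributes remain visible and editable, which is exactly the state shown in Figure 8.27.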
Figure 8.27: The Channel Box with only the relevant attributes of the node visible In Figure 8.27, you can see an example of the attributes on the main controller node for crestMist. The regular transform attributes such as translation, rotation, and scale have been hidden, and only the added user-defined attributes are visible. The most important attributes are placed at the top of the Channel Box, such as sprayUpAmount, which determines the initial velocity of the mist, and windAmp, which is the global wind speed. Some of the attributes are simply connections to the same attribute on another node and are connected here to make them easier to access. For example, here the LOD attribute is connected to the levelOfDetail attribute of the particleShape node. The LOD simply reduces the particle count as a percentage so you can see faster simulations with a fraction of the particles without changing all your emitter rates. An LOD of 0.5 would mean that Maya will keep only 50 percent of all the emitted particles and discard the rest.
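A hedged sketch of that kind of convenience connection (the node names are illustrative, not the production rig):

    // expose a LOD control on the controller and wire it to the particle shape
    addAttr -ln "LOD" -at double -min 0 -max 1 -dv 1 crestMistCtrl;
    setAttr -e -keyable true crestMistCtrl.LOD;
    connectAttr -f crestMistCtrl.LOD crestMistParticleShape.levelOfDetail;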
Putting the Method to the Test: Pushing the Limits Now that you have put yourself on the line and selected one method over the rest as the best way to get the job done, it's time to put that method to the test and see if it can do everything it's supposed to do. Here are some points to keep in mind during the test phase.
Control versus Simulation How much do you need to be able to control and how much can you leave to luck? This is an important question because you will never really have full control over everything. If you did,
you would be keyframing each particle by hand! The whole concept of using simulation is to try to control millions of tiny elements by using broad global controls and rules. The answer to how much control you will need depends largely on the director. Some directors are more easygoing than others about the details in a shot. The realistic nature of what you are trying to create will also determine how much control you will need. You will have to take comments such as "make it scarier" or "it needs to feel menacing" and interpret which controls to change on your simulation to fulfill the request. Vendor versus In-House Software If you are part of a company that has developers and programmers, you have the option of writing your own custom software to do some or all of the work for the visual effects. You need to determine how many of the tasks need custom software and how much can be done with off-the-shelf vendor software such as Maya. Table 8.3 lists the pros and cons of using Maya, and Table 8.4 lists the pros and cons of developing custom software.
Advanced Techniques As you embark on your R&D task, you have to determine which forces you need and how much control you need over them. In this section, we'll describe some techniques and give you some ideas about how to approach the problem. How you think about a problem will determine the quality of your solution. Knowing the forces you'll need and creating the forces you don't readily have available can make or break an R&D project.
Particle Simulation Is Like Cooking It's all in the mix; use your instincts. The actual value of the numbers and magnitudes and their precision compared with real life are not important. What matters is the results and how they look. You may have a correct number for gravity, but if the particles look like they are falling too fast, just adjust the gravity until the particles look right. Figure 8.28 shows an example of how to mentally make a picture of the forces involved. Think about all the forces and the size and importance of each contribution. You may not be able to accurately predict the contributions at first, but as the tests begin, it will become clearer if you have overlooked a force, in our case the wave-rising force, or if you have overestimated the importance of one, in our case gravity. Make as many of these sketches as you can to help narrow down which forces you need and how much time should be spent perfecting each.

Figure 8.28: Diagram showing how one can mentally picture the balance of forces in a simulation
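For instance (an illustration with made-up numbers, not the production values), you might create a gravity field at the physically correct 9.8 and then simply cheat the value by eye if the mist reads as falling too fast:

    // start from the textbook value...
    gravity -name "mistGravity" -magnitude 9.8 -dx 0 -dy -1 -dz 0;
    // ...then dial it down until the motion feels right on screen
    setAttr mistGravity.magnitude 6.5;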
Emitting the Particles in the Right Place Sometimes using the built-in Maya emitters will work for the job; at other times you need to develop custom emitters for the task. At the start of the R&D for The Perfect Storm, we thought the important part of simulations is what you do with the particles after they are born. It turned out that how and where they are born was half the battle. If the particles don't end up where you want them, the rest of the time will be wasted trying to get them back on track. We spent a great deal of time trying out any method of emission we could think of that made sense and was available to us in the software, and some of these methods paid off for other elements. But in the case of crestMist as well as crestFoam, which is where the mist comes from, the two had to match. They also had to be emitted at the right spot on the waves and be easily controllable by the artists who would have to suppress mist and foam on some waves and enforce them on others. Because of the number of waves and the 80 different ocean simulations, we needed an emission method that would give us procedural control over the emission; that is, manually placing splines on the break points of waves would not be viable in most cases. We eventually narrowed down the emission to a custom plug-in that could read the original simulation data and determine which waves were breaking and where. This data was then used to emit particles in the right places. Another plug-in could analyze the wave structures and place particles where appropriate. Both emitters could be easily controlled to not emit from any given wave. Figure 8.29 shows the result of these emitters as they generated the foam from breaking waves as seen from above. The image covers an area about 1000 feet wide.
Figure 8.29: The top view of a section of the ocean showing the crestMist particles emitted at the crest of select waves
Using the emit Command A technique that dates from Alias|Wavefront's Dynamation that you have access to in Maya is the use of the emit command. With it, you can generate specific particles and determine every detail about their attributes, such as position, velocity, life span, color, and any other user-defined attribute. The two images in Figure 8.30 show the tests of applying a triple emission method to the crestMist particles. Triple emission means that an emitter creates a main particleA, which then uses the emit command in its runtime rule to emit particleB, which then uses the emit command to emit particleC. This chain of emissions gives the artist more explicit control over the structure of the particles. At each phase, fewer particles have to be controlled, and each time the emit command is used, the position and velocity given to the next particle can be specifically designed. On the one hand, this technique simplifies the process, because it gives you isolated controls at each level since each particle set can be connected to completely different dynamic forces. On the other hand, this technique can complicate things because of the interdependence of each particle on its parent particle. Since crestMist had a clumpy look to it at the base of the sea foam as it lifted off the surface, we thought this method might lend itself well to this problem, but eventually that turned out not to be the case, though we did get some interesting results. During the crestMist R&D phase, we ultimately determined that the triple emission technique was not going to work for what we needed because it looked too much like snow.
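To make the emit command itself concrete, here is a stripped-down sketch of one link in such a chain, written as a runtime expression on the parent particle. The object name particleB and the damping factors are illustrative, not the values used in production:

    // runtime expression on the parent particle shape: spawn a child into particleB
    // at this particle's position, inheriting a damped copy of its velocity
    vector $pos = position;
    vector $vel = velocity;
    emit -object particleB
        -position ($pos.x) ($pos.y) ($pos.z)
        -attribute velocity
        -vectorValue ($vel.x * 0.5) ($vel.y * 0.5) ($vel.z * 0.5);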
Figure 8.30: Images of the triple emission cycle in one of the attempts to add structure to crestMist. ParticleA emits particleB, which then emits particleC. This method ultimately gave too much structure, which looked like snow.
Figure 8.31: The sum of three force vectors from three different fields and the resultant vector
Figure 8.32: A turbulence field needs to move slightly faster than the particles blown by a wind field.
Isolating Forces: Vector Sums When you deal with complex simulations, many forces are involved. Some of these will be fields, and others will be forces added within a runtime expression. Ultimately you need to run your tests with all forces active, but to help you see what each force is contributing, it is best to isolate each one individually when fine-tuning them. It is true that in most cases the forces are interdependent and that turning off one field will make all other forces behave differently. But it is also true that you can isolate each force in a meaningful way to be able to adjust parameters better. Figure 8.31 shows a sketch of how you can graphically add the forces involved to get a resultant final vector. Typically, you need to isolate the turbulence field. It is important to find the correct frequency for the turbulence. If the frequency is too high, the turbulence will display as fast jittery movement, and if the frequency is too low, the turbulence field can act as a localized uniform field. Once you isolate the turbulence field by temporarily reducing the magnitude of the other fields to 0.0 or disconnecting them, you should be able to see the effects of the turbulence on the particles. Sometimes temporarily exaggerating the magnitude of the turbulence (or another field) will give you a better sense of its subtle effect. Figure 8.32 shows how a turbulence field needs to move slightly faster than the uniform force acting as wind for mist particles. You can think of turbulence as a complex maze of invisible barriers. Moving these barriers 10 or 20 percent faster than the uniform directional particle speed will in effect tear off parts of the particles it comes in contact with, much like real stormy wind. Wavelength also affects turbulence animation. Normally, the larger the wavelength, the faster you want to move the turbulence. Smaller wavelengths tend to add jittery noise to the particles if they are moving too fast. The same problem can occur if you rotate a turbulence field if the particles are too far from the center of rotation. In general, you want to move the turbulence slightly faster than the wind force, with the lower frequency (the larger wavelength) moving the fastest and the higher frequency moving slower. Using multiple frequencies of turbulence is a good practice because it helps break up any obvious uniformity in the motion of the particles and gives you varying levels of detail.
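One simple way to get that "barriers moving faster than the wind" behavior is to animate the turbulence field's phase with an expression. This is an illustrative sketch, assuming the turbulence field exposes a phaseX channel (as it does in recent Maya versions) and a uniform wind speed of roughly 20 units per second; the node names and numbers are placeholders:

    // slide the turbulence pattern about 15 percent faster than the wind
    turbulenceField1.phaseX = time * 20.0 * 1.15;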
Figure 8.33: A frame of the crestMist simulation showing the separating gap between the mist and the top of the wave
Figure 8.34: A frame of the crestMist simulation showing the separating gap fixed
The Need for Custom Fields About a month into the R&D for crestMist, we decided to try applying our current solution to a shot with a camera low in the trough looking up at the crest of a wave. This is when we discovered how sensitive the simulations would be to the view from which we saw them. From low in the trough, you could clearly tell that the mist was separating from the top of the wave. This was due in large part to the size of these enormous waves and the speed at which they rose and fell. Doing some rough calculations, we found that an 80-foot wave would actually travel forward at 60 miles an hour. If you were floating in the water as the wave passed underneath you, you would be dropped at a rate of almost g = 32 ft/sec^2, which is essentially a freefall. Once the particles were emitted for the mist at the crest of the wave, they quickly found themselves hovering above a large gap as the water dropped away underneath them, especially if the wind was blowing in the opposite direction of the wave travel. You can see this problem in Figure 8.33 where there is a gap between the top of the wave and the mist particles seen in white. Even when the wave and wind direction matched, we continued to have this problem. When the mist was created at the surface of the water, it had to slowly accelerate to the speed of the wind, and so for the short period of time that the mist stayed near the surface, it fell way behind the crest of the wave. After several brainstorming sessions, we created a custom field that sensed the vertical travel of the ocean surface and transferred that to the particles to which it was connected. We needed to add many parameters to ensure that particles could eventually escape its grasp and otherwise maintain the motion of the uniform wind and turbulence forces. The custom field resolved the separation problem, as seen in Figure 8.34, where the particles stay with the crest of the wave and follow the rise and fall of the waves.
Figure 8.35: A CG mist element that used 10 times the number of particles in a tornado from Twister
Rendering Techniques Simulating particles is one-half of the battle; rendering them is the other half. When you render particles, you can choose from many options. You can render particles within Maya itself, or you can export them and render with other vendor software such as RenderMan and Mental Ray. If you have sophisticated programmers, you can even write your own custom particle-rendering software, which is what we did at ILM for the movie Twister. Later we improved the renderer to handle the extreme particle counts in The Perfect Storm. Within Maya is the option to use the hardware renderer, which is great for light dust elements and gives you nice sub-frame motion blur, or you can use the software renderer, which can create effects such as volumetric clouds and blobbies. Maya 4.5 has a whole new fluid dynamics engine that has its own special shaders for rendering smoke and gaseous elements. To render the mist you see in Figure 8.35, we quickly found out that we needed 10 times the number of particles that it took for a tornado of about the same size on screen in Twister. This situation was unexpected and was due to the nature of mist. Individual droplets had to be subliminally visible while looking gaseous. The finer the mist, the more particles we needed to make them visible. Another aspect of rendering the mist was to keep the solidity and density at its root where it rises from the ocean foam. Figure 8.36 shows an element as seen from above; each small wavelet at the crest of the wave creates its own dense crestMist. Frequently, water splashes are not just white blobbies smeared by motion blur. There is a lot of detail and specular highlights from the drops of water as they are exposed on film during the shutter open phase. If you study photographs of splashes when a light source such as the sun or a spotlight is present, you'll notice the following: • Drops rarely appear as a perfectly straight line. They usually appear as a curved line that is part of the parabolic trajectory they are traveling along. • There are many bright highlights along the curve, and the edges are rough and uneven. The overall color appears gray or even transparent, depending on the amount of foam within the droplet.
Figure 8.36: A CG particle rendering of a crestMist element. The camera is looking down into the ocean surface. You can observe this phenomenon by watching water spray out of a garden hose on a sunny day or by watching water spray from a showerhead. Sea water tends to have a lot more foam content, which makes the water droplets more murky and white and reduces the transparency. The specular highlights are still brighter than the overall color of the foamy droplet. Figure 8.37 shows an example of how we developed custom rendering tools to be able to represent this kind of splash. In some cases, we used this technique when rendering mist, for example, when a search light was aimed at the mist. You can use the same method to create rain. The only difference is that you won't need to curve the lines unless there is severe turbulence.
Figures 8.38 and 8.39 show a comparison of a real and a CG-rendered splash. Figure 8.39 shows a real splash shot on stage where all the detail of foamy airborne water in motion can be seen. By studying these real water elements, and knowing we had to create CG splashes that would be seen either side by side with the real splashes or in a neighboring shot, we sought to match the look. Figure 8.38 shows the result of this work in a CG splash element from a shot that was similar to the real splash reference. Next to crestMist, simulating and rendering these thick water splashes was the most difficult task on the project.
Figure 8.37: A CG rendering of a water splash, showing the curved shapes of the water droplets and the specular highlights along each drop
Figure 8.38: The CG splash
Figure 8.39: The real splash element dropped from a 2500-gallon tank on stage (from John Frazier's special effects team) You need to focus attention on detail in reference images and the overall impression of the item you are trying to replicate. To replicate a natural phenomenon, you need the essential ingredients, regardless of how impressionistic they are; you don't need every single detail. For example, a totally photorealistic painting looks fake on film, whereas just the right amount of subtle impressionistic representation of the colors in the painting will look real on film. Creating rough seas is difficult because of the transitions and transformations. Water goes from being a discrete liquid surface to foam, to a splash, to blown mist, and finally to a disappearing vapor. Representing all these elements in a single automatic process is desirable but difficult. Figure 8.40 shows the culmination of the efforts to create the crestMist element and have it appear natural and full of life. You can see this element in motion by viewing the mrl26b024_crestMist.mov file on the CD-ROM.
Creating the Button So it's time that reality caught up with the myth in the media that all you have to do is push the button and the computer does the rest! Although we all know that the button will always elude us, the shelf buttons in Maya at least start us on that path. Making it as easy as possible to set up and use an effect will help you get better-looking shots because there will be more time for refining them. Of course, this is relative to the level of difficulty of what you are creating. More complex tasks are naturally more difficult to tame and simplify, and they require more knowledge from the crew.
Figure 8.40: A frame of the CG-rendered crestMist element from a final simulation
Figure 8.41: The shelf for The Perfect Storm, showing some of the buttons we used to add effects of various kinds to the scene
A button can execute MEL commands, load plug-ins, and create connections between nodes. You might be able to contain all the necessary commands to set up a scene from scratch within the button, but an easier solution is to simply import a "generic" scene file that you have previously created with everything you need already in it, make the necessary connections, and allow the user to customize the settings for their shot. No doubt you will end up with many buttons no matter how small or large your project. It's useful to give all the custom show buttons a home on the same shelf, as we did for the Storm shelf in Figure 8.41. There were buttons to create oceans, buttons to browse through the 80 different oceans, buttons to make foam, and buttons to create crestMist. Many artists contributed to the show shelf, each being responsible for their own unique, all-powerful button, and they took pride in making their button work efficiently and consistently for the rest of the setup.
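As a trivial illustration (the scene file name is hypothetical), the MEL behind such a button can be as small as importing the prepared generic setup scene and leaving the shot-specific tweaks to the artist:

    // command attached to a shelf button: bring in the pre-built generic setup
    file -import -type "mayaBinary" "generic_crestMist_setup.mb";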
Testing and Documenting Before Postproduction Not all R&D projects are completed before postproduction. For crestMist in The Perfect Storm, the process took several months because of the complexities. Once the setup is complete, however, it is essential to test and package the solution in such a way that the button
or plug-in can be distributed to a large crew with minimal support required. The easier it is to use the setup and the clearer the documentation, the less support a large crew needs, and the faster the work can be completed. It is always difficult and tedious to write good documentation, but each well-documented feature is one less question needing to be answered. The larger the crew, the more this can add up and the more use the documentation will get. And that's the real payoff for creating thorough documentation.

Figure 8.42: The original background plate showing the full-size replica of the Andrea Gail controlled by a giant submerged gimbal system built by John Frazier's special effects team
Final Result After all the hard work, now you get to sip coffee while clicking the crestMist button and voila! Well...maybe it's not quite that easy, but, hey, these aren't easy elements. It's amazing to be able to replicate something complicated in nature, make it fully controllable, and at the same time make it relatively easy to use. The crestMist element turned out to be difficult, and it wasn't always the most obvious part of a shot, but it clearly made a difference in helping the audience feel the power of the storm in every shot. Figure 8.44 shows a frame from a shot in the rogue
Figure 8.43: A wipe showing the ocean wireframe and the rendered ocean with simulated foam and crestMist blowing off the top of the wave
wave sequence of the film. The boat was the real replica of the Andrea Gail shot in the tank in front of a blue screen, seen in Figure 8.42, and most of the water surrounding it was replaced by the CG ocean and crestMist in Figure 8.43. You can see the subtlety of the mist but also its importance in the final frame.
Summary We have covered many aspects of the vast area of particle simulation, from gathering reference to preparing effects to be used by many people on a large scale project. Technology is changing rapidly, and many new tools are available to help us beat Mother Nature and capture even the slightest glimpse of her spontaneous beauty. Particle simulation plays a big part in creating effects while maintaining full control over them—essential in production. Directors are always going to want the latest never-before-seen effects, and effects animators are going to find more creative ways to use complex particle systems to meet the director's
Figure 8.44: The final composite showing the finished shot with CG ocean extension, crestMist, sky, and haze
vision. New fluid dynamic simulations are also going to be added to the mix of tools. Using particle dynamics is very much a Zen art in which you have to learn how to work with the forces and not fight them. Many things happen by accident in dynamics, and the techniques covered in this chapter will hopefully help you diagnose the bad accidents and keep the good ones.
Creating a Graphic User Interface Animation Setup for Animators Susanne Werth
of producing works such as Antz, Shrek, Toy Story, Final Fantasy, and Monsters Inc. as well as a multitude of CG television shows (mostly for the children's market) increases, the time allotted to production continues to decrease. Thus, it is critical to develop methods that ease CG production workflow. Companies such as PDI and Pixar write their own software packages, or plug-ins, attempting to reduce costs by optimizing off-the-shelf tools to more efficiently do specific jobs. The primary goal of any animation tool is flexibility. Maya offers great flexibility in its graphic user interface (GUI). For specific production environments, however, it is often expedient or even necessary to use Maya Embedded Language (MEL) to create new interfaces for modelers, animators, and others in the production pipeline. In this chapter, I'll introduce you to MEL and show you step by step how to create a GUI for animators. MEL is a scripting language with similarities to C/C++. But even with no programming experience, you can create scripts that ease your work with Maya. I'm not going to start from scratch and explain all the MEL basics. Rather, I'll explain the procedure for creating a graphical user interface in a straightforward way so that beginners and advanced readers can get results quickly.
For a thorough introduction to Maya, see Maya 4.5 Savvy, by John Kundert-Gibbs and Peter Lee, from Sybex, Inc.
If you have an idea for a script, the best way to begin is to check out the MEL community on the Internet for similar scripts. A good source is the MEL page of Highend3D (www.highend3d.com). You might find scripts you can use right away or scripts that will suit your needs with only modest adjustments. The idea for the animation script in this chapter was inspired by a tutorial by Ron Bublitz, who created a character picker for Maya 2.x. You can find the script at www.highend3d.com/maya/tutorials/ron1/.
Planning the GUI Before you start programming any GUI, you must figure out what the interface will look like, which options it will have, and so forth. You have several options. For example, you can choose a simple check box or radio button structure, text-based buttons, or even a picture of the character you want to animate.
In a big production team that includes a lot of international people, I often use a picture-based GUI, and this is the interface I am going to explain in this chapter. The basic idea is that the animator can choose the handles of the character intuitively, by clicking an icon at the same body position in the picture as the handle or animation control on the model. The first step is to render a picture of the character and print it, or to sketch the character if modeling is not yet complete. Make a list of all the handles you want to include in your interface, and try to position them in your picture. You'll use this list later when you create the pattern of the icon positions. If you need to include a lot of handles, try to position them horizontally. That way you keep the number of icons small. The simpler the pattern, the less work you have to do to position the icons with the script. Some animation controls, for example, hand and facial controls, are better represented as sliders. You can include them in your character control GUI, but if you have a complex character, I recommend creating an extra GUI window for those parts. Here are some other questions to answer when planning your GUI:

• Does your character setup include several skeletons that will be animated?
• Must you include handles for a Mocap, Offset, IK, and/or FK skeleton?
• Do you want the animator to be able to choose between these options?

During the planning process, try to get as much input from the animators on your team as you can. Talk about their experiences in other projects, which tools they used, what they would like the tools to be able to do, and so on. The more information you collect before you start scripting, the less time you lose making changes later.
In this chapter, I'll show you how to change between different skeletons while using only one interface. Last but not least, you must know if the GUI has to work for one or for several characters. In a big production, animators might need to control more than 10 characters. Creating a different GUI for each is too much work. You can, however, modify the interface so that the MEL script works with all 10 characters. Distribute versions of the GUI to the animators for testing as soon as possible. They are the ones who have to work with it, so try to meet their needs and get feedback as soon as you can.
Naming Conventions You can select a node in a hierarchy in different ways. You can address the node with the complete path or by its unique name. Figures 9.1 and 9.2 show two character hierarchies of the same skeleton structure. If there is only one character in a scene, it doesn't matter which method you choose. But if two characters share the same hierarchy structure in the skeleton, Maya will be confused if you do not use the complete path method. Figure 9.1 shows two models that have
Figure 9.1: Two characters with the same hierarchy structure and no prefix
Figure 9.2: Two characters with the same hierarchy structure and prefixes
the same hierarchy structure. If you try to select the node root_h_M of model puppet_G_1 by typing the following at the command line:

    select "root_h_M";

Maya responds with the following error message:

    // Error: line 1: More than one object matches name: root_h_M //

You must use the following full path of the node:

    select "puppet_G_1|puppetSkel|root_h_M";
As you can see, the script gets complicated and excessively long when using this technique. You can easily achieve the same result by using models like those shown in Figure 9.2. Each model has a prefix that runs throughout the hierarchy. Therefore, each node has a unique name, even if the skeleton structure is identical. To select the root handle of the first character with the name AD_puppet_G requires nothing more than the following:

    select "AD_root_h_M";
With the prefix method, you can use the same skeleton to create different characters. To distinguish the characters, run a prefix rename script over the hierarchy structure, or, while importing or referencing the character scene, follow these steps:

1. Choose File → Import or Create Reference.
2. In the Name Clash options, set the drop-down boxes to Resolve [all nodes] with [this string].
3. In the With field, enter your prefix, and be sure not to put in the underscore.
4. Set the File Type to the appropriate format (Maya Binary or Maya ASCII).
5. Click the Import button or click the Reference button and select the scene file.

Maya imports or references the scene file and prefixes all the nodes without your having to do anything further (a scripted equivalent is sketched below). An underscore is automatically added between the prefix and the original node name. The prefix can be a complete name or only a letter. I recommend positioning identification tags at the beginning of a node, but you can proceed in several ways. Depending on your project or database structure, you might want to place letters at the end of the node name instead. A naming convention requires that all members of the team abide by the rules you establish. If the GUI setup must work with more than one character, all characters must be based on the same character setup and must have the same name in each node, aside from naming prefixes.
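For reference, here is a rough scripted equivalent of those steps; the file name and prefix are illustrative, not part of the chapter's sample scenes:

    // reference a character scene and prefix every node with "AD"
    file -reference -type "mayaBinary" -renamingPrefix "AD" "puppet_G.mb";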
You can use the MEL command tokenize to search for specific letters in a string. The underscore in the name is used as a separator to distinguish not only between characters but also between different versions of the same character. For example, you can give a low-resolution animation model the prefix _A_, and you can give a high-resolution model the prefix _H_. Regardless of the method, you can have different versions of a character in a scene and select a version with the GUI if your script can search for this attribute. In the GUI itself, you create the optionMenu that includes the choices.
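Here is a small example of tokenize at work (my own illustration); it splits a node name on underscores so the script can examine the prefix and version tag:

    string $parts[];
    int $count = `tokenize "AD_root_h_M" "_" $parts`;
    // $parts[0] now holds the character prefix "AD"; $count is 4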
Cutting the Icons Before you start programming, cut the rendered picture of your character into pieces. Use any graphics program that allows you to save bitmaps (.bmp). After planning where to place each icon or button, draw red lines with a width of 4 pixels and without anti-aliasing to define the size of the icons. The border around the picture has to be 2 pixels wide. (See Figure 9.3.) Try to keep the pattern simple so that only a few graphics have to be loaded. Once you finish your pattern design, print a picture of it. The next step is to cut out the parts between the red lines. As you save each icon as a bitmap file, copy the name of the icon file (see Figure 9.4) and the size of the bitmap into the printed image. You need this information later for positioning the icons in the formLayout. The next step is not essential for the function of the script, but convenient for the animator. The idea is to make it visually clear that an icon is selected. The icon types Maya offers don't always show a difference between their pressed and unpressed state. Therefore, you take the image of the icon, colorize it, and save it with the same name plus the postfix t to identify that this image is used in "selected" or "ticked" status. Later in the script, we'll search for the t in the name of the image and update the script according to the selection status.
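A minimal sketch of that idea, using an iconTextButton whose image is swapped for its "t" version when clicked; the file names, control name, and handle name are placeholders rather than the chapter's final script:

    // an icon that selects the head handle and switches to its "ticked" image
    iconTextButton -style "iconOnly" -image1 "headIcon.bmp"
        -command ("select -r AD_head_h_M; " +
                  "iconTextButton -e -image1 \"headIcont.bmp\" headBtn;")
        headBtn;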
Figure 9.3: Defining the size of the icons
Figure 9.4: Naming the icons
Creating the Window If you have scripted before, you know that the first lines of the main GUI routine declare the window, as in the following:

The code examples in this chapter don't provide functionality as a standalone. The code might be incomplete or vary from the sample script. For testing the code examples, use the sample script from the CD.
    window -w int -h int -t "windowName displayed" windowName;
    // declare layouts, icons, buttons, etc.
    showWindow windowName;
This code creates the window when the script is called, but Maya doesn't delete the window if you don't tell it to do so. By using the following if command before the window is created, you can ensure that you always have only one window of the same script running if the window with the name windowName exists (-ex):

    if (`window -q -ex windowName` == true)   // queries (-q) whether the window exists (-ex)
        deleteUI windowName;
When the if command returns true, the deleteUI command deletes the window before creating a new one in the following line. The actual code in the sample script at the end of the main procedure is:

    if (`window -q -ex CharWindow` == true) deleteUI CharWindow;
    window -w 500 -h 500 -t "Character GUI" CharWindow;

and

    showWindow CharWindow;
At the end of the script, you call the main procedure, in which the window is created. By calling the main procedure inside the script, you don't have to type the procedure name every time after sourcing it. Especially during the debugging process, saving these keystrokes can be helpful. Another possibility is to create a MEL shelf button that sources the script and calls this procedure in one pass.

Creating the Layouts

As I mentioned earlier, in the planning process you make decisions about the look of your GUI. For example, will it be only a picture of the character, or will it include buttons, tabs, and an optionMenu? The more complicated the look, the better it is to start with a formLayout in which to position the layouts you need. Using a formLayout gives you a flexible basis for making changes later. First, declare the formLayout:

    string $stringName = `formLayout formLayoutName`;

In the example script:

    string $form = `formLayout baseFL`;
Declare every layout with a unique name so that you can easily address a layout later. This is important if you create other windows that act on data from your GUI, either by query or edit. Second, declare all the children layouts you want to use before you edit the formLayout, as shown in the following:

    1.  string $tabs = `tabLayout -imw 5 -imh 5 baseTL`;
    2.  setParent $form;
    3.
    4.  string $head = `rowColumnLayout -nc 1 -cw 1 300 optionRCL`;
    5.  setParent $form;
    6.
    7.  string $bottom = `rowLayout -nc 4 buttonRL`;
    8.
    9.  formLayout -edit
    10.     -attachForm $tabs   "top"    30
    11.     -attachForm $tabs   "left"   0
    12.     -attachForm $tabs   "bottom" 30
    13.     -attachForm $tabs   "right"  0
    14.     -attachForm $bottom "bottom" 0
    15.     -attachForm $head   "top"    0
    16. $form;
You can think of a GUI layout structure as a hierarchy. On top, you have a formLayout, in the example named with the string $form. At the next level, you declare the tabLayout baseTL, the rowColumnLayout optionRCL, and the rowLayout buttonRL as children of the formLayout baseFL, side by side. When you declare the children, you must know the level to which they belong. In lines 2 and 5, you specify the level by telling MEL which parent the child layout has. In the example, you set $form as the parent. Another possibility is to jump up one level in the hierarchy by typing the following:

    setParent ..;

Addressing by the control name will also work. For example, use setParent baseFL instead of the statement in lines 2 and 5. Next, you edit the top formLayout by telling its children which position they have within the main window. I used the -attachForm flag to give them a precise position that doesn't change even when the window is resized. The formLayout is the most powerful but most complex layout type MEL offers. We'll look at it in detail later in this chapter.

The formLayout type of layout will not work on its own to create a layout for your window. Likewise, it is extremely important to let MEL know that the sublayouts (such as rowColumnLayout) have the formLayout as their parent (using the setParent command or the parent flag of the child control/layout). If not, the script generates an error when it attempts to build the window. Because MEL is fussy about exactly how layouts are related, you might want to look at the MEL reference for more on parenting layouts.
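As a quick illustration (a hypothetical variation, not taken from the sample script), the same parenting can be expressed with the -parent flag instead of a separate setParent call:

    // Equivalent to declaring the child layout and then calling: setParent $form;
    string $head = `rowColumnLayout -nc 1 -cw 1 300 -parent baseFL optionRCL`;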
Creating the optionMenu

Now that you have declared the area in which the optionMenu will be placed:

    string $head = `rowColumnLayout -nc 1 -cw 1 300 optionRCL`;

you can declare the optionMenu itself. To find all characters in a scene, you have to tell Maya to search for something specific. With Maya 3 and later, you can create objects of type character. If you use Mocap and the Trax Editor, the character object is already created as part of your workflow. If not, you might want to consider creating a character object inside each character file in your setup stage. In this way, the character object is loaded automatically with each new character model in the scene. The character object should have the prefix name you want displayed in the selection list, because you are going to search for these object names in the script. Of course, you can search for animatable characters in a scene in other ways. For example, you can search for main nodes and compare the result with an array of all possible character names. Or you can list all top-level transform dag objects that are visible:

    string $list[] = `ls -as -v`;
This list will contain all characters and set elements. If you distinguish the characters with a prefix or postfix, you can tokenize each element of the list; if you get the result you are searching for, save it inside the $charList that is displayed in the optionMenu. The following code shows the use of the object of type character.

    // --optionMenu list of all characters in the scene--
    1.  string $charList[], $char;
    2.  $charList = `ls -type "character"`;
    3.  optionMenu -l "Character: " -w 200 charMenu;
    4.  for ($char in $charList)
    5.  {
    6.      menuItem -l $char;
    7.  }
In this example, you first declare an array $charList[], which you need as a place to store the result of the ls command (line 2). Because you don't know how many characters might be found, leave the length of the array open so that your computer doesn't reserve more memory than is needed. The second variable $char is needed to run the for-in loop (line 4). I prefer the for-in loop declaration because it saves some code. The classical way is to declare integer variables that contain the size of the array and run through a for loop, which can look like the following:

    int $arraysize = size($charList);
    int $i;
    for ($i=0; $i<$arraysize; $i++)
    {
        menuItem -l $charList[$i];
    }
The variable $char in the for-in loop does the same: it takes on the value of the first array item, which is then shown in the GUI as a menu item with the name pulled from $charList using the menuItem call, and it continues until the end of the array. The for-in loop is preferable to the second example because it runs through the list more elegantly. Line 3 of the previous code declares the optionMenu by specifying the width and label text. You can declare the change-command flag (-cc) here, executing the command specified as a string as soon as a menuItem of the optionMenu is selected. See the script on the CD for an example. The menuItem itself is the text shown as an option in the selection list. Let's say you have a scene with the characters "Louis_A_v3" and "Fred_R_v1". When the script runs through line 2, the array $charList contains the following elements:

    $charList = {"Louis_A_v3", "Fred_R_v1"};
In the first round through the loop, beginning in line 4, the variable $char has the value "Louis_A_v3". When the first round of the loop is over, the value of $char changes to the next array element, which is "Fred_R_v1", enters the loop in round 2, and creates a second menuItem and list element with the label "Fred_R_v1". After that, the loop terminates because the array $charList has no more elements. The loop runs as many rounds as there are elements in the array $charList.
Declaring the Graphic Controls

How you declare the graphic controls depends on the type you are using. MEL provides a variety of buttons, sliders, check boxes, and icons. In this chapter's script, I use icons, which use images to define the state in which they are toggled. Image-based icons are probably the most work-intensive type to create in an interface. If you don't want to put too much effort into cutting out graphics, a version with simple buttons, sliders, and text labels might be better for you. The next three sections describe different possibilities for a layout.

Check Boxes
One way to select animation controls is to picture the skeleton with check boxes. The formLayout gives you the necessary control over positioning these elements. Figures 9.5 and 9.6 show examples of interfaces that are realized with check boxes. In Figure 9.6, an embedded image helps identify the handles. You can use any kind of image (a drawing, a reproduction of the joints, and so on). Just use your imagination. For each check box, you must declare a variable of type string, as in the following code.

    1.  string $a3 = `checkBox
    2.      -w 12 -h 12 -vis 1
    3.      -ann "head"
    4.      -onc "getonc(\"head_h_M\")"
    5.      -ofc "getofc(\"head_h_M\")"
    6.      head_h_M`;
Figure 9.5: A sample check box layout without a background picture
Figure 9.6: A sample check box layout with a background picture
Line 2 specifies the size of the check box. If you don't want an extra border around the check box, use a size of 12 x 12 pixels. Otherwise, increase the values of the width (-w) and height (-h) flags. Even though visibility is enabled by default, you must set the visibility flag explicitly. Because you position the FK and IK handles on top of each other, you switch visibility on and off depending on the ik_fk attribute value. The annotation in line 3, defined by the flag -ann, is a little pop-up text box, with the text in quotation marks. This text is "head" for the handle head_h_M. An annotation such as this helps the animator select the correct control, especially if the controls are close together. In addition, annotations help you debug mistakes in the declaration of the icon positions, because the icons won't end up in the correct position if there's a mistake. Sometimes they end up on top of each other in the upper left corner. Using annotations, you can quickly identify a misplaced icon by moving the mouse cursor over it. Lines 4 and 5 declare commands that will be called when the check box is turned on (-onc) or off (-ofc). You can either enclose all the commands in quotation marks separated by semicolons, or you can call a procedure that contains these commands. Line 6 contains the name of the check box. No matter which icon, button, or check box type you use, be sure to name each one. When you run the script, the icon and button attributes are changed by some procedures; therefore, you need the unique names of these elements. If you don't create meaningful names, Maya creates names such as checkBox1, checkBox2, checkBox3, and so on. Maya numbers these serially from the start of the Maya session and not from the opening of each window, which means that you can't count the number of check boxes in a window and draw a conclusion about the number of a check box. What you're looking at might not be the first window created. To avoid confusion, create useful names from the beginning. I'll discuss the setup of the procedures getonc and getofc later in this chapter.
Radio Buttons

If you decide to use radio buttons, don't forget that a radio button toggles states automatically. Thus, you can't select more than one radio button at a time. MEL uses the flag -add with the select command. The check boxes allow multiselection. If you toggle the selection, only one element at a time can be selected, and the previously selected element of the active list is replaced by the currently selected element. Only one radio button of a radio button group can be selected. Figure 9.7 shows an example radio button layout. You might want to use radio buttons if you want to let the animator choose between the additive and the toggle selection methods. To provide your animator(s) with both in the same interface, create a control to change between check boxes and radio buttons. Using an update procedure, you provide the interface once with icons or check boxes. If the animator selects the toggle method, you display the radio button version. Of course, you can also provide a toggle selection with icons and check boxes, but by providing radio buttons you underline the method visually.
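If you want to try the toggle behavior in isolation, a compact way to get it is a radioButtonGrp. This is only a hypothetical sketch (the group name and labels are made up and are not part of the sample script):

    // Two mutually exclusive buttons; selecting one automatically deselects the other.
    string $ikfkGrp = `radioButtonGrp -numberOfRadioButtons 2
        -label "Left arm:" -labelArray2 "IK" "FK"
        -select 1 ikfkToggleL`;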
On the CD-ROM, you'll find the radio button layout shown in Figure 9.7 as charGUI_radio.mel. It works with the woodenDoll.mb file to select handles one at a time. Selection is the only functionality of this script. You can use it as a start and then embed more functionality.
Icons
The following icon types are available in Maya:

iconTextStaticLabel  A static label that is not useful for our purposes, because it can't be connected to a command.

iconTextRadioButton  An icon with text and/or images that behaves like a radio button. If you mix different icon types and want to use the radio button behavior for a control that should be turned off when the state of another icon is turned on, consider that the pressed state of the radio button can only be toggled by another icon or button with radio button behavior.

iconTextButton  An icon with text and/or images; a command can be executed when the control is clicked or double-clicked. The visual state of the control doesn't change to reveal its state unless you program an image exchange procedure that is executed when the iconTextButton is clicked.

iconTextCheckBox  An icon with text and/or images; a command can be executed when the iconTextCheckBox is turned on and another when it's turned off. The on or off state is visually emphasized and can be enhanced by an image exchange procedure.

symbolButton  A button with an image and a command that can be executed when the symbolButton is clicked.

symbolCheckBox  A symbolButton that behaves like a check box. The advantage of the symbolCheckBox is that you can specify an image and/or command for the states on and off, enabled and disabled.

Figures 9.8 and 9.9 show the Maya icon types. Theoretically, you can use all kinds of icons or symbol buttons, except the iconTextStaticLabel. Only the look and functionality you want to create is important as you decide which call(s) to use. Using specific icon or button types for certain functionalities makes the functionalities easier to recognize.
Figure 9.7: A sample radio button layout
Figure 9.8: The Maya icon types in their unpressed state
Figure 9.9: The Maya icon types in their pressed state
In this chapter's script, I use the iconTextCheckBox. The following code shows an example of the declaration of each icon's functionality.

    1.  string $imagePath = "F:/Sybex_Maya_Buch/picis/";
    2.  string $b10a = `iconTextCheckBox
    3.      -st "iconOnly"
    4.      -il ($imagePath+"b10.bmp")
    5.      -w 13 -h 12 -vis 1
    6.      -ann "IK_leftArmPoleVector"
    7.      -onc "getonc(\"ikPoleArm_h_L\")"
    8.      -ofc "getofc(\"ikPoleArm_h_L\")"
    9.      ikPoleArm_h_L`;
Positioning the Icons

Positioning the icons inside the formLayout can be time consuming. Before you begin, you must understand how the icons are displayed. The following code shows you some ways in which you can attach an icon to the layout. You address an attachment type to each side of the icon: "top", "bottom", "left", and "right".

    1.  formLayout -edit
    2.      -attachForm     $b2 "top"    5
    3.      -attachForm     $b2 "left"   5
    4.      -attachControl  $b2 "bottom" 5 $b3
    5.      -attachPosition $b2 "right"  5 50
In line 2, we attach the top side of the icon $b2 5 pixels from the top side of the formLayout. In line 4, we attach the bottom side of the icon $b2 5 pixels from the top side of the icon $b3. The attachPosition option attaches the icon according to the number of divisions you specified across the form. If you choose 100 divisions, position 50 is 50 percent. The use of division numbers can also provide absolute positioning if you are using a square picture. To achieve this, choose the same number of divisions as the width and height of the picture in pixels.
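To make the division idea concrete, here is a small hypothetical sketch (the layout name and control are invented); note that -attachPosition takes an offset followed by a division-based position:

    // With 100 divisions, position 50 anchors the edge at 50 percent of the form's width.
    string $posForm = `formLayout -numberOfDivisions 100 posFL`;
    string $b = `button -l "test" -parent posFL`;
    formLayout -edit -attachPosition $b "left" 0 50 posFL;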
Nonsquare pictures, such as those used in the sample script (charGUI_icontextcheckbox.mel on the CD), force relative positioning if you want to use division numbers for positioning the controls in the formLayout. For example, the picture has a width of 200 pixels but a height of 250 pixels. By choosing a division number of 200, we get absolute positioning horizontally. The value 50 for the left side of the control would be an absolute positioning of 50 pixels from the formLayout border, but vertically, the value 50 produces a relative position for the top of the control of 20 percent.

The previous code example shows the attachment of all four sides of the icon $b2, but you need to attach only two sides to anchor an icon. If you attach a side to nothing, you use the -attachNone flag. In Maya, you can attach a control in a formLayout in six ways. See the Maya Help documentation to find out more about them.

Relative Positioning

Relative positioning is great if you want two icons or controls to stay side by side or in a specific order even when the window is resized. The following flags

    -attachControl         [iconname] "sidename" offset [iconname]
    -attachOppositeControl [iconname] "sidename" offset [iconname]
    -attachPosition        [iconname] "sidename" offset
are primarily used in a relative way, for example:

    -attachControl         $b "left"  n $a
    -attachOppositeControl $b "right" n $c
When you use relative positioning, the position of icon $b is defined by the position of icon $a and/or $c. The offset specifies the distance of $b from $a and $c. The flag -attachControl aligns the left side of icon $b to the right side of icon $a with an offset of n pixels. The flag -attachOppositeControl aligns the right side of $b to the furthest side of icon $c with a distance of n pixels. The online MEL documentation provides a detailed explanation of the attach flags and includes several examples to help you understand the function of these flags. In the MEL User Guide, go to Chapter 13, "Creating Interfaces," UI Elements, and scroll to the explanation of formLayouts.
If you position the icons in a relative way, you must cut up the background image completely. Depending on the number of handle controls in the model, this could be a lot of work.
Sometimes it's tedious to arrange the layout icons until they are all in place. It would be nice to be able to create absolute placement and reduce the number of icons that have to be saved, because only the position of the icons with functionality is needed. If you create a space.bmp file that has a size of 1 x 1 pixel, you don't have to cut and save the icons without functionality. With the information about the extent of the area, you can use space.bmp as the image file instead, because these icons won't be shown anyway (the visibility flag is set to 0).
Absolute Positioning

As I mentioned, you can attach by position in an absolute way if the image is square, in combination with the declaration of a division number equal to the size of the image. But what can you do if the image isn't square, as is the case with our sample script? Fortunately, you can use the attachForm flag to create absolute positioning anyway. The idea is that you can position an icon by its corners. Because you can give an absolute position with the attachForm flag for all sides of the icon, you define the position of the corners. If you use a graphics program such as Photoshop to cut the image into icons, you might know that the width and height of an image are counted from the top left corner. The formLayout positioning works in the same way. (See Figure 9.10.) If you want to specify the position of the top left corner of an icon, you identify the position of the corner's left and top coordinates. Let's say we have icon $a3 in our sample script. When we cut the icon out of the rendered picture, we note the position information for one corner. We need only one corner to specify the position in the formLayout.
Figure 9.10: The direction in which pixels are measured in an upside-down coordinate system, such as is used in Maya and in other graphics programs such as Photoshop
In your graphics program's Info tool, be sure to set the units to pixels before you use it.
Positioning in the GUI works like an upside-down coordinate system. The values you get for the top left corner of icon $a3 are x = 82 and y = 20, which means 82 pixels from the left side of the formLayout and 20 pixels from the top:

    -attachForm $a3 "top"  20
    -attachForm $a3 "left" 82
    -attachNone $a3 "right"
    -attachNone $a3 "bottom"
Specify the bottom left corner as follows:

    -attachNone [iconname] "top"
    -attachForm [iconname] "left" offset
    -attachNone [iconname] "right"
    -attachForm [iconname] "bottom" offset
Specify the top right corner as follows:

    -attachForm [iconname] "top" offset
    -attachNone [iconname] "left"
    -attachForm [iconname] "right" offset
    -attachNone [iconname] "bottom"
And specify the bottom right corner as follows:

    -attachNone [iconname] "top"
    -attachNone [iconname] "left"
    -attachForm [iconname] "right" offset
    -attachForm [iconname] "bottom" offset
If you have difficulties imagining which side corresponds to which corner, think of the sides of the control as straight lines. At the points where they cut each other, you define the corners of the control. The two straight lines that create such a point are the sides you have to attach for this corner.
Procedures
Until now we have been concerned with creating the look of the GUI. We developed buttons, icons, sliders, and so on. However, the interface doesn't do much yet because we haven't told it to run any commands when the interface elements are manipulated. You specify the function of the interface with the following procedures, which are called when an icon, a button, a check box, or a radio button is clicked.
Updating the GUI

The main procedure, CharGUI, designs the GUI window with all its layouts, buttons, icons, and the optionMenu. But thus far, when the window opens, we see only the names of the characters in the scene stored in the string array $charList. The icons and buttons don't yet hold any functionality. To make them do something, we need an update procedure that reads out the values of a character's attributes and changes the interface according to these values. Because we use the GUI with more than one character, this procedure is called every time a new character is selected. Therefore, we embed the procedure call in the change-command flag of the optionMenu, as follows:

    optionMenu -l "Character: " -w 200 -cc "upCharGUI()" charMenu;
This simple query code works only because our sample file w o o d e n D o l 1 . m b contains a character object with the prefix AD as name and a strict naming convention in the file. If you don't search for character objects, you must parse the result of the 1 s command executed
Procedures
earlier in the declaration of the optionMenu, to find out the prefixes of the characters before you save them into the array S c h a r L i s t . You can also parse inside the u p C h a r G U I procedure. For example, you don'twant to present a cryptic declaration in the selection list of the optionMenu, such as AD_H_vl.8_250101. Instead, you program the display of more meaningful text, such as HighRes Animation Doll version 1.8. In such a case, the text labels don't match the prefix identification of our model naming convention. Second, AD_H_vl.8_250101 is displayed in the selection list, because it's the name of the top node of your character or character object. But in the naming convention of your model, only the prefix AD_H follows down the hierarchy tree. In this case, you need to parse again using the t o k e n i z e command. Step two is to find out which animation mode is selected. In our example model, we provide only the arms, with a choice for an IK or an FK animation mode. The more possibilities your model offers—for example, a choice as well for the legs, or the possibility to switch Mocap on or off—the more attributes you need to read out in the update procedure. In our wooden doll scene ( w o o d e n D o l 1 . m b on the CD-ROM), the switch attribute is included with the rotation handles of the hands. The attribute itself shows the driven key values 0 and 1 in the Channel box, but you can change the driven key to a higher value, such as 10, and blend between IK and FK. The value 1 is for FK, and 0 is for IK. The idea behind the query is that you can read the value of the i k _ f k attributes with one look at the check boxes in the GUI and change them at this location if needed. At the same time, we are going to update the check box states every time an animator changes the values by using the Channel box. But first we have to edit the check box behavior according to the selected character. You can't edit the commands by turning the check boxes on or off in the main procedure, because you need to know the prefix for identification. And with each change of the character selection, the check boxes must be updated. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13.
checkBoxGrp -e -onl ("checkBoxGrp -e -v2 false t o g g l e L ; setAttr "+$prefix+"_hand_h_L.ik_fk_L 0") -on2 ("checkBoxGrp -e -vl false toggleL; setAttr "+$prefix+"_hand_h_L.ik_fk_L 1") toggleL; checkBoxGrp -e -onl ("checkBoxGrp -e - v 2 false t o g g l e R ; setAttr "+$prefix+"_hand_h_R.ik_fk_R 0") -on2 ("checkBoxGrp -e - v l false t o g g l e R ; setAttr " + $prefix+"_hand_h_R.ik_fk_R 1") toggleR;
Lines 1 and 6 specify which c h e c k B o x G r p to modify. The next step is to give the c h e c k B o x G r p a radio button behavior and update the attribute value according to the selection. The c h e c k B o x G r p provides detailed specification for the execution of on and off state commands for each c h e c k B o x inside the group. In the previous code, the on state command is chosen. By setting the value 2 to false, we turn off the second check box in the c h e c k B o x G r p , when the first check box, which represents the IK value, has been turned on ( - o n l ) . At the same time, the i k _ f k attribute is set to 0, which means IK. With the second check box, we proceed vice versa. In lines 8 to 13, we edit the right arm of the model in the same way.
If you prefer the look of radio buttons, this situation is ideal for them. You don't need the extra commands to provide radio button behavior, and you save some lines of code.
After the editing, we want to read out the ik_fk attribute values of the model and update the check boxes accordingly.

    int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
    int $ikval_R = `getAttr ($prefix + "_hand_h_R.ik_fk_R")`;
The next step is to describe a conditional action depending on the test conditions of the saved attribute values, as shown in the following:

    1.  if ($ikval_L == 0)
    2.  {
    3.      checkBoxGrp -e -v1 true  toggleL;
    4.      checkBoxGrp -e -v2 false toggleL;
    5.  }
    6.  else
    7.  {
    8.      checkBoxGrp -e -v1 false toggleL;
    9.      checkBoxGrp -e -v2 true  toggleL;
    10. }
With the test condition $ikval_L == 0, we tell the if statement to execute the commands enclosed in braces when the ik_fk attribute value is set to 0. In that case, we turn on the IK check box and turn off the FK check box. We save the conditional action that updates the status of the check boxes as its own procedure with the name upChecks. To ensure that the check boxes are updated when the attribute values are changed through the Channel box, you create a scriptJob. The scriptJob MEL command offers the observation of attribute values and executes a command we specify if a condition, an event, or an attribute change comes true.

ScriptJobs can slow down your system. If they aren't killed through a parent UI object or by their job number, they can run forever.
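For reference, upChecks is essentially the conditional block above wrapped in a procedure. The following is only a sketch of what it might look like (the CD script may organize it differently), shown here for the left arm only:

    global proc upChecks()
    {
        string $prefix = `optionMenu -q -v charMenu`;   // assumed query, as before
        int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
        if ($ikval_L == 0)
        {
            checkBoxGrp -e -v1 true -v2 false toggleL;
        }
        else
        {
            checkBoxGrp -e -v1 false -v2 true toggleL;
        }
        // ...repeat the same pattern for the right arm (toggleR, ik_fk_R)...
    }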
In the upCharGUI procedure you write:

    scriptJob -p CharWindow -ac ($prefix+"_hand_h_L.ik_fk_L") "upChecks";
    scriptJob -p CharWindow -ac ($prefix+"_hand_h_R.ik_fk_R") "upChecks";
With the flag -p (parent), we tell the scriptJob to terminate when the UI element CharWindow is deleted. This ensures that no scriptJob continues when you close the Character GUI. With the flag -ac (attribute change), we tell the scriptJob to observe the value of the ik_fk attributes of the selected character. If this condition comes true, the procedure upChecks is executed, and the check boxes are updated to the new value. Because the scriptJob is created in the upCharGUI procedure, every time you change to another character, which calls upCharGUI, you create a scriptJob according to the selected
character. But the old scriptJob of the previously selected character isn't terminated and consumes system capacity until the CharWindow is deleted. Imagine five or more characters in a scene: as you change between them while animating, you can easily produce 10 or more scriptJobs running at the same time. But only the two scriptJobs of the currently selected character are useful. All the others just slow down or even crash your system. Therefore, terminating all running scriptJobs before you create two new ones for each new character assures a safe workflow. When the user selects a new character, the following line kills all scriptJobs:

    scriptJob -ka;
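Note that -ka kills every scriptJob in the Maya session, not only those created by this interface. A gentler variation, sketched here as an assumption rather than the sample script's actual approach, is to remember your own job numbers and kill only those:

    // Keep the job numbers returned by scriptJob and kill only our own jobs.
    global int $charGUIJobs[];
    int $job;
    for ($job in $charGUIJobs)
        scriptJob -kill $job -force;
    clear $charGUIJobs;
    $charGUIJobs[0] = `scriptJob -p CharWindow -ac ($prefix+"_hand_h_L.ik_fk_L") "upChecks"`;
    $charGUIJobs[1] = `scriptJob -p CharWindow -ac ($prefix+"_hand_h_R.ik_fk_R") "upChecks"`;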
Switching between FK and IK Icons

The next step is to update the visibility of the FK and IK icons. You want to ensure that only the handles of the selected animation method are accessible. When models are complicated and allow animators a great deal of freedom—which means several handles for IK and FK or offset and Mocap—operating mistakes are inevitable. Thus, a GUI that controls the visibility of these tools can decrease the number of mistakes and can be a time-saving tool for the production. In each case, display only the icons that belong to the selected animation method—FK or IK in our sample script. The upFKIKicons procedure offers this feature, depending on the state of the IK and FK check boxes.

    string $ikarmcontrols[] = {"ikPoleArm", "ikarm"};
    string $fkarmcontrols[] = {"fkshoulder", "fkelbow"};
First, you declare the names of the IK and FK controls in string arrays. Our example model provides two specific controls each for IK and FK at the arms. But in your production, you might find other possibilities for legs, tails, and so on. Create an array for each part that can change between IK and FK states. In our example, we decreased the number of arrays by 50 percent, because we have a good and strict naming convention. This allows us to forget about declaring separate arrays for the left and right sides.

    int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
    int $ikval_R = `getAttr ($prefix + "_hand_h_R.ik_fk_R")`;
Second, we get the attribute value of the ik_fk attributes and save the integer value in new variables. Another possibility is to query the state of the IK and FK check boxes, which might look like the following:

    int $state = `checkBoxGrp -q -v2 toggleL`;
If you look at the command description of the checkBoxGrp command, you see that the queried flag -v2 (value2) is of type Boolean, which means true or false. But we save the value as an integer. This works because the Boolean value true can also be described as the integer 1, and false as 0. So if the check box is turned on, we get the response 1, and if it is turned off, we get 0 as the result. You can't declare a variable of type Boolean, because MEL considers a Boolean value a constant of type integer with the keywords true, on, and yes for 1, and false, off, and no for 0. The next step is to change the visibility according to the value of the attribute. If the variable $ikval_L has the value 0, IK is selected. In this case, we want the IK controls of the
left arm to be shown and the FK controls of the left arm to be hidden. The following code shows the necessary conditional actions and for loops to achieve this.

    1.  if ($ikval_L == 0)
    2.  {
    3.      for ($i=0; $i<size($ikarmcontrols); $i++)
    4.      {
    5.          iconTextCheckBox -e -vis 1 ($ikarmcontrols[$i] + "_h_L");
    6.          iconTextCheckBox -e -vis 0 ($fkarmcontrols[$i] + "_h_L");
    7.      }
    8.  }
    9.  else
    10. {
    11.     for ($i=0; $i<size($ikarmcontrols); $i++)
    12.     {
    13.         iconTextCheckBox -e -vis 0 ($ikarmcontrols[$i] + "_h_L");
    14.         iconTextCheckBox -e -vis 1 ($fkarmcontrols[$i] + "_h_L");
    15.     }
    16. }
    17. if ($ikval_R == 0)
    18. {
    19.     for ($i=0; $i<size($ikarmcontrols); $i++)
    20.     {
    21.         iconTextCheckBox -e -vis 1 ($ikarmcontrols[$i] + "_h_R");
    22.         iconTextCheckBox -e -vis 0 ($fkarmcontrols[$i] + "_h_R");
    23.     }
    24. }
    25. else
    26. {
    27.     for ($i=0; $i<size($ikarmcontrols); $i++)
    28.     {
    29.         iconTextCheckBox -e -vis 0 ($ikarmcontrols[$i] + "_h_R");
    30.         iconTextCheckBox -e -vis 1 ($fkarmcontrols[$i] + "_h_R");
    31.     }
    32. }
In this code, you see the use of the arrays $ikarmcontrols and $fkarmcontrols we declared earlier. The for loop in line 3 implicitly declares the index $i. We specify that the loop runs through the array starting at the first position, which is always array[0], until the last string in the array. To find out the number of strings in the array, you use the size($array) command. Tell the loop to execute while the index $i is smaller than the size of the array. The symbols $i++ tell the loop to increase the index by 1 after each run through. In our script, $ikarmcontrols contains two strings. Therefore, the size command returns the value 2. We have $i with the value 0, and we check the test condition, which is true, because 0 < 2. The first iteration of the loop creates the following statement in line 5:

    iconTextCheckBox -e -vis 1 ($ikarmcontrols[0] + "_h_L");
which means:

    iconTextCheckBox -e -vis 1 ("ikPoleArm" + "_h_L");

which again means:

    iconTextCheckBox -e -vis 1 "ikPoleArm_h_L";

Now we increment the index $i, which means $i = $i + 1. Then we test the condition again: 1 < 2 is true. The statement changes to:

    iconTextCheckBox -e -vis 1 ($ikarmcontrols[1] + "_h_L");

which means:

    iconTextCheckBox -e -vis 1 "ikarm_h_L";
We increment again and test 2 < 2, which returns false. Here the loop terminates, and the script continues in line 7. However, no more statements have to be executed in the if condition, so the next step is to run the else condition. Because the test condition for $ikval_L has been 0, the else statements are executed if $ikval_L != 0 is true, which means every value except 0. If your attribute defines the minimum as 0 and the maximum as 1 of type integer, no mistake can be made. But if you have higher values for the maximum or have defined the attribute as a float, you can blend between the two states of the attribute. If you decide to use a float, script an else if condition to specify the exact value at which the else statements should execute. For example, if you want to blend between 0 and 1, you might want to make the switch at 0.5. In that case, the else if condition looks like the following:

    else if ($variable >= 0.5)

The if condition looks like this:

    if ($variable <= 0.5)

Lines 17 through 32 proceed in the same manner for the character's right arm.
Why Use Arrays?

Why do we use arrays in this case? Some might think that the programming would be easier if we forgot about relative programming and entered the absolute names of the controls we are going to change. That would mean that we don't need to declare the arrays and that we don't need any for loops that work off the array elements. Programmers aim for slim and flexible code structures for practical reasons. For example, if you program in module style, the idea is to reuse some modules in other projects. But in other projects, the values, which are our animation control names here, vary. If you script the actual names, you must change them in every line in which you used them. In our case, the reason is similar. In big productions, the animation-tool scripting starts in parallel with the character setup. Both teams should work hand in hand, but, of course, changes are going to happen during the scripting stage. If you use arrays for declarations, you keep the number of changes to a minimum. Let's say that the name of the ikPoleArm_h_L handle changes to PoleArmIk_h_L or that you add a new handle. The only elements of the code you have to change are the arrays. The code in the previous example works no matter how many items are inside the array or what they are called. At least as long as you don't make a spelling mistake!
Switching between IK and FK icons prevents the animator from making an incorrect selection, which can occur when the animator selects the handles directly in the scene. To make sure this can't happen, modify the visibility of the animation control handles. One way to do so is to include this modification in the script, right after the visibility state of the icons is changed. You can do this by changing the visibility attribute of the handles or, if you combine the controls in layers, by switching these layers on or off. Use the setAttr command to achieve visibility changes.

    setAttr "layer1.visibility" 0;
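Applied to the handles themselves rather than a layer, the same idea might look like the following sketch (assumed naming, mirroring the icon loops above):

    // Hide the left-arm FK handles in the scene when IK is the active method.
    for ($i=0; $i<size($fkarmcontrols); $i++)
        setAttr ($prefix + "_" + $fkarmcontrols[$i] + "_h_L.visibility") 0;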
Being able to hide certain animation controls by manipulating their visibility requires a hierarchy structure that the sample file woodenDoll.mb on the CD doesn't provide. If you set the root.visibility attribute to 0, the legs, the arms, and the head also disappear. Feel free to adjust the model if you want to try out the explanations.
Exchanging Icon Images

Now we need to exchange the iconTextCheckBox images. We produced icon images for the turned-off and turned-on states. The images for the turned-on state have the postfix _t. In the main procedure, we declared what happens if the iconTextCheckBox is turned on. The procedure that's executed in that case is called getonc(string $controlname), to which we pass the name of the selected control. In the procedure itself, we first query once again the prefix of the character. We then query the name of the image path and analyze the name of the icon.

    $imagepath = `iconTextCheckBox -q -il $controlname`;
You can divide a string into pieces using the tokenize command. First, you divide by the symbol ".":

    tokenize $imagepath "." $nodot;

With a path such as F:/Sybex_Maya_Buch/picis/d4.bmp, you get two parts: F:/Sybex_Maya_Buch/picis/d4 and bmp.
The tokenize command returns the number of resulting parts as an integer, in this case 2. The parts are saved into the array specified at the end of the tokenize command. The next step is to divide the first part of the array by the symbol "/". The result is four pieces, of which only the last is interesting to us.

    int $noslashnum = `tokenize $nodot[0] "/" $noslash`;
Now you can edit the iconTextCheckBox image by adding the postfix _t. Last but not least, select the selected item with the passed-on $controlname variable.

    select -add ($prefix + "_" + $controlname);
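Put together, the image swap inside getonc presumably looks something like the following sketch (the directory is the one used when the icons were declared; the exact arrangement in the CD script may differ):

    // Swap in the "ticked" image for the selected state.
    iconTextCheckBox -e
        -il ("F:/Sybex_Maya_Buch/picis/" + $noslash[$noslashnum-1] + "_t.bmp")
        -v on
        $controlname;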
To uncheck an icon, you use the same procedure, but you tokenize one more step, splitting the last array item by the symbol "_".

    tokenize $noslash[$noslashnum-1] "_" $buffer;
If the resulting array contains "t" at the second position, you have proof that the previous image was of the on state. If the command is embedded in the getofc procedure, which is
executed only when the iconTextCheckBox is turned off, you don't need this proof. To display the unpressed icon image again, edit the image path of the iconTextCheckBox as follows:

    iconTextCheckBox -e
        -il ("F:/Sybex_Maya_Buch/picis/" + $buffer[0] + ".bmp")
        -v off
        $controlname;

The flag -v (value) is a special feature of the iconTextCheckBox. It's a visualization of the pressed state, which looks as if the border is inlaid rather than raised—a fairly standard way of indicating that an item is pressed in a GUI.

When you try out the scripts on the CD-ROM, remember that you must adjust the paths that lead to the images. If you don't, you'll get error messages and won't be able to see the images.
Creating the Select All and Deselect All Buttons

The Select All and Deselect All buttons are essential for workflow with a GUI. However, you don't want to select every element in the scene, only the animation controls of the selected character, and from them only the controls of the IK arm or FK arm, depending on which animation method is selected. As before, you first need to query the up-to-date character and declare the arrays for the IK and FK controls. All other controls that never change in visibility are treated as body controls.

    string $bodycontrols[] = {"head_h_M", "neck_h_M", "collar_h_M",
        "upperSpine_h_M", "lowerSpine_h_M", "root_h_M", "ikPoleLeg_h_L",
        "ikPoleLeg_h_R", "hand_h_L", "hand_h_R", "ikfoot_h_L",
        "ikfoot_h_R", "ikball_h_R", "ikball_h_L"};
Then, again, you distinguish the conditional actions according to the ik_fk values of the hand attributes. Inside the for loops, we call the procedure getonc, which selects the items from the arrays, as shown in the following:

    int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
    int $ikval_R = `getAttr ($prefix + "_hand_h_R.ik_fk_R")`;

    if ($ikval_L == 0)
    {
        for ($i=0; $i<size($ikarmcontrols); $i++)
        {
            getonc($ikarmcontrols[$i] + "_h_L");
        }
    }
    else
        for ($i=0; $i<size($fkarmcontrols); $i++)
        {
            getonc($fkarmcontrols[$i] + "_h_L");
        }

    if ($ikval_R == 0)
    {
        for ($i=0; $i<size($ikarmcontrols); $i++)
        {
            getonc($ikarmcontrols[$i] + "_h_R");
        }
    }
    else
        for ($i=0; $i<size($fkarmcontrols); $i++)
        {
            getonc($fkarmcontrols[$i] + "_h_R");
        }

    for ($i=0; $i<size($bodycontrols); $i++)
    {
        getonc($bodycontrols[$i]);
    }

The inverse of this process occurs in the deselall() procedure. Some might say that you could also simply deselect every object in the scene. You can do so by typing

    select -cl;
but this command, which is also executed when you click nothing in the scene, makes it impossible to deselect only one character while another character is still selected. For example, imagine two characters, AD and WD. Both characters should make a parallel movement of the right arm. Let's say my first choice is the character AD. I decide to use the IK controls of the arm and therefore select the AD_ikarm_h_R handle. I want to set keys for both characters at the same time. In the optionMenu, I choose the character WD. The GUI updates itself, but the previous selection is still active. Now I accidentally select the wrong handle. If I click the Deselect All button, only the handles of the character WD are deselected. In the case of the select -cl command, all handles are deselected, leading to wasted time and frustration. Maya already provides a quick and easy way to clear the whole selection, so the Deselect All button of a GUI should offer something more than that.

Updating the GUI By Not Using It

The purpose of any good MEL script is to be flexible, supporting the freedom of animation methods Maya offers. Some animators select NURBS curve handles in the scene and translate and rotate them as needed. Others work only with the Channel box, the command line, and the middle mouse button. Still others prefer to select from a GUI. To support this freedom, the next paragraphs explain how to make sure that the GUI is always up-to-date, even if the animator jumps between different Maya animation tools. The
animator can select icons with the GUI, animate directly in the scene by selecting NURBS handles, and then jump back to using the GUI icons without losing information about the state of the character. To achieve this permanent information flow, you create another scriptJob. This scriptJob is very time and capacity consuming, because it runs whenever the selection changes. You place it with the other scriptJobs inside the upCharGUI procedure.

    scriptJob -p CharWindow -e "SelectionChanged" "selicon";
The executed command is a user-defined procedure. The idea is that the icons of the controls selected by the animator are displayed in the pressed state in the GUI. In the procedure, you first read out all selected dag nodes of the active list.

    $dagnodes = `selectedNodes -dagObjects`;
Be aware that character elements as well as every other object on the active list are returned. The command gives back full path strings of the items. Next, you must check the size of the $dagnodes array. If the size returns 0, no elements are in the active list. This occurs when the select -cl command has been executed. The scriptJob reacts to all changes in the selection, not just additions. If the active list is empty, all icons of the GUI should be in unpressed mode. Therefore, the untickall procedure is called. You can't use the getofc or deselall procedure, even though either procedure changes the icon states, because these procedures also include select commands. If they were executed, the result could be a nonterminating loop, and the system would crash. To avoid that, you create a new procedure, called untickall, that changes only the icon states. If the active list isn't empty, run some tokenize commands to separate the control names from the hierarchy path. Divide first by "|" and then analyze the last item of the array. Because of our naming convention, all animation control handles have the middle letters _h_ for handle. With the second tokenize command, you search for this item.
With another conditional action, you ensure that only the handles in the interface that really belong to the currently selected character are updated in the GUI. In the naming convention, we made sure that the h tag was at the second position from the right. Everything on the left of this tag could be character information such as prefix, version type, and so on. All this information should be part of the prefix, but might include the underscore. Because you divide the string by this symbol, you have to put what is on the left of the letter h together again and compare it with the prefix of the selected character. string $ p r e ="" ;
for ($j=0; $j<$buffernum2-3; $j++) I {
$pre += $buffer2[$j]; if ($j<$buffernum2-4) $pre += "_"; I if ($pre != Sprefix) u n t i c k a l l ;
287
288
C H A P T E R 9 • Creating a Graphic User Interface Animation Setup for Animators
If the test condition is false, you prove that the item in the active list belongs to the selected character. Next, you query the path of the icon image of the animation control if the second test condition else if ( $ b u f f e r 2 [ $ b u f f e r n u m 2 - 2 ] == " h " ) becomes true. To do that, you first put the control name together again: s t r i n g $ c o n t r o l n a m e = ( $ b u f f e r 2 [ $ b u f f e r n u m 2 - 3 ] +"_" + $buffer2[$buffernum2-2]
+"_"
+ $buffer2[lbuffernum2-l]);
You then query the full path into the string $ b u f f e r 3 : $ b u f f e r 3 = ' i c o n T e x t C h e c k B o x -q - i l $ c o n t r o l n a m e ' ; and isolate the image path of it: string $imagePath = getPath($buffer3); Divide the full path to find out the name of the icon file: int $ b u f f e r n u m 3 =
'tokenize
$buffer3
int $ b u f f e r n u m 4 =
'tokenize
$buffer4[0]
int $ b u f f e r n u m 5 = ' t o k e n i z e
"."
$buffer4'; "/"
$buffer5';
$buffer5[$buffernum4-1]
"_"
$buffer6~;
and edit the path and value of the control: iconTextCheckBox
-e
-il
($imagePath -v
on
+$buffer6[0]
+"_t.bmp")
$controlname;
The g e t h P a t h procedure mentioned earlier is slightly different from all the procedures presented so far. The string S i m a g e P a t h shows that the procedure returns a value like a queried command. If you want a procedure to return a string, float, or int, you must specify this type at the start of the declaration: g l o b a l proc s t r i n g g e t P a t h ( s t r i n g $ i m a g e P a t h )
The word s t r i n g after the word p r o c indicates a return value of type string for this procedure. Between the brackets, proceed as usual. At the end, specify the value you want to return by typing: return limagePath; As soon as the script passes a return command, it jumps back to the line it has been called and takes the value into the source procedure. Therefore, the return command should always be at the end of a return procedure because any commands after a return command won't be executed.
Warnings and Other User Helpers After you distribute the tool to the animators, you will realize that changes are necessary because of different approaches to using the interface. If a specific workflow has to be followed in your GUI to provide perfect functionality, consider inserting warnings or notes to guide the user. Maya provides the following types of alerts:
Summary
Warning A warning is purple text that is displayed in the command feedback line. When you provide a warning in your GUI, use the following syntax: warning "text you want to be showndisplay"; Error Message An error message is red text that is displayed in the command feedback line. When you provide a warning in your GUI, use the following syntax: error "text you want to be showndisplay"; Alert An alert is a dialog box that contains text and/or a button selection. You use alerts to force the user to confirm a certain action. A popular example of an alert is a dialog box that asks if you really want to quit a program. Annotation An annotation is a little pop-up text field that explains the function of a button, an icon, or some other control. The flag is available for nearly all control types and has the following form: - a n n / a n n o t a t i o n "text that you want to be showndisplay"
In addition to the tool we created in this chapter, you might want to develop a control for animating the fingers and a control for facial animation. For both face and hands, I recommend implementing sliders. Using the a t t r F i e l S l i d e r G r p command creates an embedded update between Channel box and slider interface values with the flag - a t t r i b u t e . Here's an example. / / p o s s i b l e d e c l a r a t o n i n s i d e the m a i n p r o c e d u r e a t t r F i e l d S l i d e r G r p - m i n 0 - m a x 1 -s 1 - p r e 0 - c a t 1 " l e f t " 1 - c a t 2 " l e f t " 1 - c a t 3 " l e f t " 1 - c w 1 80 -cw 2 50 -cw 3 10 -cc " M o K e y c h a n g e ( \ " l e f t A r m _ M O C A P \ " , \ " M o C a p a r m l \ " ) " -1 "left a r m " MoCaparml' // editing of the slider group in the MoKeychange procedure a t t r F i e l d S l i d e r G r p -e - a t ($prefix + " a n i m C o n t r o l s . " + $ a t t r ) $ i n d e x a t t r ;
You can help your animators speed up their animation by creating a database of facial expressions and/or hand positions. Place them on a shelf with images of various states. Add a special feature that saves additional expressions or positions that the animator creates. If an animator creates poses they want to reuse, they can record the pose, and it is included into the database. This database is open to all animators. With a database of character-specific expressions or poses, you can ensure the consistency of animations for a single character. At the same time, animation output per week will increase.
Summary This chapter showed you how to plan, prepare, and program an animation GUI. I described various approaches and showed you how to avoid certain problems. Using this overview, you can now play with layout elements to develop GUIs that suit your own personal style. Be aware that the solutions I described present only one way, but you can achieve something in several ways. Use your imagination to find your own.
289
Effective Natural Lighting Frank Ritlop
at a major studio, at a two-person production company, or on your own personal project, you likely have a set deadline for completing your assignment, and it's usually tight, lf photo-realism is the rendering goal, you need to work both efficiently and intelligently to finish the project on time, you may have access to a thousand-processor render farm, but chances are you need to share it with a dozen others, so computer speed is usually also an issue. This chapter covers methods for getting photo-realistic results while keeping in mind the need for short production and render time. Specific steps interspersed throughout the chapter will help clarify the theory along the way. As in any field, it takes time and practice to acquire the lighting skills to create photo-real virtual environments. It's not rocket science, but gettinggreat results requires a thorough knowledge of lighting basics; so that's what we'll begin with. Even if you've been doing lighting for a while, going over these basic principles will be a good refresher and will ensure that you understand Maya's terminology.
Lighting Design Before you roll up your sleeves and start the actual hands-on work of lighting, you must establish what your scene will look like. This generally happens in a meeting with the client or the director, either of whom may already have a mental picture of the scene. If you're working on your own project, you'll likely have a clearer idea of what you want to achieve.
294
C H A P T E R 10 • Effective Natural Lighting
Illustrations provided by the client or director can save you days of treading down the wrong path. If you have only verbal descriptions and hand gestures, you'll need to start interpreting, and that can waste valuable time. Lighting design is the process of creating the look of a shot that you can use as a guideline for other shots in a scene to achieve consistency. If you're working on a single image, you'll still need to go through the process, working closely with the client or director to come up with what they want. This usually involves establishing lighting for the set first. This task can take anywhere from a few hours to a few days, depending on the complexity of the scene. Keep your setup as simple as possible. The fewer lights you use, the more manageable the shot, not to mention the faster it will be to render. Using fewer lights also lets you get quicker feedback, which is essential in a fast-paced project. As a starting point, you might want to use reference material such as photographs to generate ideas. Photographs show how light bounces, diffuses, and reacts in different situations. If you've never touched lighting before, a trip to the theater can be useful. You can see how the actual instruments are used to create light. Movies are also a great source of lighting ideas. Simply studying still images from films to see where light is coming from or what colors are being used can help you create a more interesting lighting environment for your scene.
Lighting Passes Using multiple lighting passes to render your scene gives you greater control over the final look of your environment. For example, you can render a scene totally in one raytraced pass, but if you want blurred reflections or a change in the depth of field, the only option is to rerender the entire scene. On the other hand, if you have a separate reflection pass, a z-depth pass, and so on, you can manipulate those layers and combine them to get the look you want without having to rerender. If you're working for a studio that has a compositing department, either you'll be told what lighting passes they require, or you'll need to keep some good notes that describe what each layer contains so that the compositor doesn't have to hunt you down to find out what you've handed over. If you have characters in the set, you'll probably want to render them as a separate layer for the following reasons: • If there is a lighting mistake with your character or set, you don't need to rerender the entire scene. • You'll have an easier time lighting your character with lights that you extract from your final set lighting without having to do elaborate light linking. (I'll discuss light linking a little later in this chapter.) • Memory issues involving heavy sets or characters might not permit you to load all elements at the same time. • You can composite your character and set after all the elements are completed and use post-processing effects to better marry your layers.
The Properties of Light

Just as an animator must study motion in humans and animals before being able to reproduce their movements on paper or in the computer, the lighting artist must study how light
looks, reacts, bounces, and reflects in the real world in order to do a convincing job. How can you make it obvious to the viewer that they're looking at a scene that is being lit with incandescent bulbs instead of fluorescent tubes, or that a scene is taking place on a sunny midday rather than on a cloudy, overcast afternoon? The ability to distinguish between different qualities of light is a fundamental requirement of lighting artists in this industry. Before going any further, let's look at some basic attributes of Maya lights. In Maya, all light types share two common attributes: color and intensity. You can map the color attribute with a texture, animate it, or leave it as a solid color. The intensity attribute accepts a single value to control the brightness of a light source. You can also use negative values, which result in light subtraction; this can diminish light on specific surfaces or create shadow effects without the added render expense of calculating actual shadows. You can also map intensity with the alpha channel of a texture, or if the alpha channel doesn't exist, you can designate the luminance as the alpha channel. If you don't have 256MB of RAM on your workstation, you might encounter slowdowns as you work through some of the examples in this chapter. With memory being so affordable these days, consider upgrading your machine to 512MB.
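If you prefer to work from the Script Editor, the same two shared attributes can also be set directly with MEL. This is only a minimal sketch; the shape name fillLightShape is hypothetical, and the values are placeholders:

    // Hypothetical light shape name; color and intensity are the two attributes all Maya lights share.
    setAttr "fillLightShape.color" -type double3 0.9 0.85 0.7;  // a warm, slightly desaturated color
    setAttr "fillLightShape.intensity" -0.3;                    // a negative intensity subtracts light from surfaces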
Mapping a Texture to a Light's Color Attribute

In the real world, almost every light source possesses nuances with respect to shape and color that set it apart from all other light sources. You can add some subtle rings to a spotlight by mapping a texture into the color attribute. Follow these steps: 1. From the main menu, choose File → Project → New to create a new project. Enter lighting Tutorial in the Name text box. LM click the Use Default button to create a default set of directories within your new project, and then LM click Accept. 2. From the main menu, choose Create → Lights → Spot Light, and in the Outliner, rename your light to ceilingLight. 3. From the main menu, choose Window → Attribute Editor or press Ctrl+A to open the Attribute Editor. You should see a default set of light attributes, as shown in Figure 10.1. 4. LM click the map button (the checkered box next to the Color slider) to open the Create Render Node dialog box, which is shown in Figure 10.2. 5. Click the Textures tab, and then click the Ramp button in the 2D Textures section to display your Attribute Editor's Ramp attributes. 6. Rename the ramp to ceilingLightRamp, select Circular Ramp under the Type attribute, and select Smooth under Interpolation. 7. Click one of the color control circles to the left of the ramp box to activate the Selected Color and Selected Position attributes for that part of the ramp. To position a color control point, LM click and hold it while dragging it up or down. 8. Click the color swatch next to the Selected Color attribute to open the Color Chooser. 9. Desaturate the color, change the hue until you get a pale yellow, and then click Accept. 10. Repeat steps 7, 8, and 9 to change the color of another control point in the ramp, or add other color control points to the ramp by LM clicking directly in the ramp box.
Figure 10.1: The Attribute Editor for ceilingLightShape
Figure 10.2: The Create Render Node dialog box
To delete a color control point at any time, LM click the small square box to the right of the ramp. You should attempt to get a ramp that looks similar to the one in Figure 10.3. 11. To save this light to your current project's scenes folder, select your light, choose File → Export Selection from the main menu, and name the file exercise1. We'll use this file later in this chapter. If you want to map the intensity attribute with a texture instead of the color attribute, adjust your material's Alpha Gain to control the intensity of your light.
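The same setup can also be scripted. The following MEL sketch is an assumption-laden shorthand for steps 2 through 10, not the exact commands Maya issues behind the UI; in particular, the enum indices for the ramp's Type and Interpolation attributes should be verified against the Attribute Editor in your version, and the color values are placeholders:

    // Create the spotlight; the command returns the new light's shape node.
    string $lightShape = `spotLight -name "ceilingLight"`;
    // Create the ramp texture and set it to a circular, smoothly interpolated ramp.
    string $ramp = `shadingNode -asTexture ramp -name "ceilingLightRamp"`;
    setAttr ($ramp + ".type") 4;           // assumed: 4 = Circular Ramp
    setAttr ($ramp + ".interpolation") 4;  // assumed: 4 = Smooth
    // Make the first ramp entry a pale, desaturated yellow.
    setAttr ($ramp + ".colorEntryList[0].color") -type double3 1.0 0.93 0.72;
    // Map the ramp into the light's color attribute.
    connectAttr -force ($ramp + ".outColor") ($lightShape + ".color");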
Decay rate defines how quickly light intensity diminishes with respect to the distance traveled. In the real world, light decays with the square of the distance, a quadratic decay rate. For example, a wall has a given luminosity when lit with a flashlight from 10 feet. To get the identical surface luminosity from a flashlight placed 20 feet from the wall, you'd need a bulb 4 times as intense. A cubic decay rate requires a source 8 times more intense to equally illuminate a surface at 20 feet. With a linear decay rate, light diminishes in intensity with a 1:1 proportion with respect to distance from the light. If there is no decay rate, a light's intensity remains constant regardless of the distance the light must travel before reaching a surface. Figure 10.4 shows how different decay rates affect a simple scene.
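Restated as a rough rule of thumb (this is the relationship described above, not Maya's exact shading code), the brightness a surface receives from a light of intensity I at distance d falls off as:

    brightness ∝ I          (no decay)
    brightness ∝ I / d      (linear)
    brightness ∝ I / d^2    (quadratic)
    brightness ∝ I / d^3    (cubic)

so doubling the distance costs you 2, 4, or 8 times the intensity for linear, quadratic, and cubic decay, respectively.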
Light Types

When using Maya, you can work with ambient, directional, point, spot, and area lights. In this section, I'll describe these light types, explain the properties that are specific to each, and suggest possible uses.
Ambient Light

Ambient light can best be likened to light that strikes an object from every direction, as in the dim, directionless light that exists just after sunset. In Maya, you use the Ambient Shade attribute to control how light strikes the surface of an object. Adjusting the value between 0 and 1 results in a corresponding shift from directionless light to light emanating outward from a single point. Figure 10.5 shows some Ambient Shade settings using scanline rendering.
Figure 10.3: The Ramp attributes
Although ambient light behaves similarly to a point light when you set Ambient Shade to 1, at render time, bump mapping is ignored, which results in flat surfaces that may not properly represent your object's surface attributes.
Unlike other light types, ambient lights do not emit specular light regardless of the Ambient Shade setting. Also, it is not possible to create depth map shadows with ambient light, but raytraced shadows are supported.
Figure 10.4: Different decay rates (no decay, linear, quadratic, and cubic)
Figure 10.5: Ambient Shade settings for an ambient light (0, 0.45, and 1)
(I'll discuss shadows in detail later in this chapter.) If Use Ray Trace Shadows is turned on for an ambient light, the far side of an object will not receive any light during raytrace rendering, leaving Ambient Shade to affect only that area of a surface exposed to light rays. To produce photo-realistic results, many lighting artists shun the use of ambient lights in setups. Lightening shaded areas with other light sources produces better results than using the Ambient Shade attribute. Ambient lights flatten the look of the final image, especially when used to illuminate concave areas, such as the inside of a character's mouth or the nostrils, where flat lighting would not commonly be found.
Directional Light

You can think of directional light as light that emanates from an infinitely great distance, resulting in parallel rays. Sunlight hitting the earth is as close as you can get to an example of directional light: it doesn't matter whether you're standing in your backyard or on the other side of town; by the time the sun's rays reach you, they are nearly parallel. Also, sunlight's intensity does not change measurably from one part of town to another, resulting in a negligible decay rate. Directional light in Maya does not decay, making it a suitable light type for re-creating sunlight and moonlight. Directional light is useful for lighting vast areas in a scene. On the other hand, you can just as easily affect small areas of your scene by using light linking or objects, such as a wall with a window frame, to mask the light. Since directional light does not have a starting point from which light emanates, objects positioned in front of, behind, above, or below a directional light are lit identically with respect to intensity and direction.
Point Light

A point light emits omnidirectional light from a single point in space. Point lights are useful for simulating light sources such as candles, incandescent light bulbs, or spill light emanating from an isolated brightly lit surface, such as a theater stage that is lit with a bright narrow spotlight.
Spotlight

Spotlights illuminate a cone-shaped volume with the source located at the tip. You define the shape of a spotlight by giving it a cone angle in degrees.
Figure 10.6: A spotlight with an intensity curve (on the left) versus one without (on the right)
Penumbra angle and dropoff are two additional properties that give you control over how the edge of the light tapers off. A positive penumbra value fades the light intensity outside your original cone angle by the value entered in degrees, whereas a negative value tapers the edge inward by the value entered in degrees. Dropoff defines how light is distributed within the cone angle. A value of 0 distributes the light evenly throughout the cone angle. Increasing the dropoff diminishes light intensity toward the edge of the cone. With spotlights, you can easily create intensity curves from the Light Effects section of the Attribute Editor. You can adjust intensity curves in the Graph Editor and control the intensity of light at any given distance from your spot's origin. This is extremely useful for gradually fading lights in and out to create a more photographic lighting situation while avoiding clipping. A perfect example of clipping can be seen near the origin of the spot in the image on the right in Figure 10.6. In the image on the left in Figure 10.6, intensity near the origin tapers off more, as it would be seen by the naked eye. The contrast ratio perceived by the human eye is far greater than that of any film stock or video technology, so you might think that there should be hot spots or blown-out areas in the frame to more closely match what you'd see on film or video. I agree, but keep such areas to a minimum. Most photographers and cinematographers strive to reduce glare and burn using reflectors, scrims, and blockers. Clipping, or overlighting, in computer graphics has a completely different look than that of film or video and needs to be minimized to avoid calling attention to itself.
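As a point of reference, these spotlight attributes can also be supplied at creation time with MEL. A minimal sketch, borrowing the cone angle and penumbra values used for the lampBot light later in this chapter (the intensity is a placeholder):

    // Cone angle, penumbra, and dropoff can all be passed as creation flags.
    spotLight -name "lampBot" -coneAngle 90 -penumbra -15 -dropOff 0 -intensity 1.0;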
Area Light

With area lights, you can create diffuse light sources just as professional studio photographers do using soft boxes or bounce flash. They use these techniques to get soft, pleasing shadows and highlights. Used effectively, area lights can be great for simulating light from fluorescent tubes and panels, from bounce cards, or from window light. Area lights are represented as rectangular planes with a line segment protruding from the center to designate their illuminating side. You can shape them using the Scale tool; increasing or decreasing the apparent emission area increases or decreases the intensity of the light as well. The combination of the Intensity attribute and the size of the plane dictates the intensity of the area light. Decay rate also plays a role in controlling how light is emitted. Figure 10.7 shows how increased decay rates result in increasingly spherical light emission, a result of the falloff beginning from the area light center and not at its edge.
Figure 10.7: Different decay rates with area lights (no decay, linear, quadratic, and cubic)

You'll get better results by setting the Decay Rate attribute to No Decay. By default, area lights decay using a quadratic falloff from the edges of the light-emitting plane.
Light Linking

One feature that you'll find invaluable is the ability to selectively designate a light to shine on specific objects within your scene while forgoing others. This is termed light linking, and it plays a major role in creating photo-realistic lighting situations. Use the Relationship Editor's Light Centric Light Linking menu set to create the connections between lights and objects by selecting the light in the Light Sources list and then selecting the component(s) that you want to link it to in the Illuminated Objects list. Or you can physically select your light and objects within your scene, and then from the Rendering menu set, choose either Lighting/Shading → Make Light Links or Lighting/Shading → Break Light Links to link or break the connection between a light and an object. These two methods are the most common ways to create the necessary connections between lights and objects within scenes.
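Light links can also be made and broken from the Script Editor with the lightlink command, which is handy when the same setup has to be repeated across many shots. A small sketch, using object names from the tutorial scene on the CD (they are hypothetical outside it):

    // Link a light to an object, break a link, and query what a light currently illuminates.
    lightlink -make  -light lampArea -object couch;
    lightlink -break -light lampArea -object ceiling;
    lightlink -query -light lampArea;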
Adding Multiple Ceiling Lights with Intensity Curves

In this section, we'll position, assign intensity curves to, and make duplicates of the ceilingLight you created earlier in this chapter. You'll see how eliminating clipping near the ceiling light's origin using intensity curves can prove invaluable when creating a natural-looking scene. Follow these steps: 1. Load the set.mb scene file from the chapter10/scenes folder on the CD-ROM, and import the exercise1 file you previously saved. 2. To select ceilingLight and the ceiling geometry, select the light and then Shift+select the ceiling from your perspective view. Choose Lighting/Shading → Break Light Links from the Rendering menu set to prevent the ceiling from casting a shadow on the entire set. 3. Open the Attribute Editor for ceilingLight, and set the Decay Rate to Quadratic, the Cone Angle to 90, and the Penumbra Angle to -45. 4. In the Depth Map Shadow Attributes section, switch on Use Depth Map Shadows, switch off Use Dmap Auto Focus, and enter a value of 90 for your Dmap Focus. 5. Constrain the ceilingLight to the ceilingLamp01 by selecting the ceilingLamp01 group node found in the set|ceilingLights node in the Outliner or Hypergraph, Shift+selecting the ceilingLight, and choosing Constrain → Point from the Animation menu set (to ensure the light is positioned correctly, check that the ceilingLight_pointConstraint1 weight is set to 1 in your Channel Box). Rotate the light by -90 on X to point it downward. 6. Now that your light is positioned, you can create the intensity curves. Open your ceilingLight Attribute Editor, and in the Light Effects section, click the Create button next to Intensity Curve to open the lightInfo Attribute Editor.
7. Choose Window → Rendering Editors → Render View from the main menu to open your Render View. 8. From your Render View menu, choose IPR → IPR Render → shotCam. When the render is complete, you might suspect that something went wrong since everything is black except for the ceiling bulbs and the lampshade. Actually, your decay rate is set to Quadratic, so you'll need to increase the light intensity before you see any detail. LM click and drag out a rectangular box in your Render View just below the ceilingLight to force the IPR renderer to update this area as you make changes to your light. 9. Reselect your ceilingLight from the Outliner or Hypergraph. 10. Choose Window → Animation Editors → Graph Editor to open the Graph Editor. Choose View → Frame All to fit the curve in the editing window. You should have a curve resembling that in Figure 10.8. What you have is a graphical representation of the ceilingLight Intensity attribute; the vertical axis corresponds to the ceilingLight intensity, and the horizontal axis represents the sample distance from the light source. The height of the room is about 22 units, so any adjustments to keys past a sample distance of 30 units won't affect your scene.
11. Select the first four keys and raise them until you start to see the brick texture of the wall under the light. 12. Select the key at a sample distance of 0, and decrease it until you eliminate the clipping from the wall. You may need to adjust the sample distance of the remaining keys until you are satisfied with how the light starts to fall off. 13. Once you get something similar to Figure 10.9, choose Edit → Duplicate from the main menu. Set the Number of Copies to 2, ensure that Geometry Type is set to Copy, and ensure that the Group Under option is set to Parent. Click Duplicate to complete the operation. 14. As you did in step 2, select both new lights and the ceiling, and then select Break Light Links from the Lighting/Shading menu. 15. Via the Hypergraph, delete the constraints that have been copied under your lights, and constrain your copied lights to the ceilingLamp02 and ceilingLamp03 group nodes as you did with your original in step 5.
Figure 10.8: The intensity curve for ceilingLight in the Graph Editor
16. To control the color of the ceiling lights globally, we'll connect the ramp texture feeding your ceilingLight to the other two copies. In the Outliner, choose Display → Shapes, and select the ceilingLightShape node. 17. Open the Hypershade and click the Show Upstream Connections button. Your nodes and connections should look like those in Figure 10.10. 18. MM drag and drop the ceilingLight1Shape node from the Outliner to the Hypershade Work Area. 19. In the Work Area, MM drag the ceilingLightRamp node over the ceilingLight1Shape node and release. Select color from the pop-up menu. This operation connects the ceilingLightRamp to your duplicated ceiling light. Figure 10.11 shows that now both ceilingLightShape and ceilingLight1Shape share the same color texture. 20. Repeat steps 18 and 19 for ceilingLight2Shape. (The same connections can also be made with the short MEL sketch at the end of this tutorial.) 21. You'll want to create new intensity curves for your duplicated lights since these connections were not copied when you duplicated your ceilingLight. Before you do, open your ceilingLight Attribute Editor and click the Input box next to the Intensity slider (alternatively, select the Intensity Curve tab in the Attribute Editor) to open the IntensityCurve Attribute Editor, as shown in Figure 10.12. This figure shows a tabulated version of the keys you adjusted earlier through the Graph Editor. Click Copy Tab at the bottom to create a floating window that you can use as a reference when you change the values for the other two lights. 22. Create a new set of intensity curves for ceilingLight1 as in step 6, but in the LightInfo1 Attribute Editor, click the output connections button found next to the lightInfo input box to access the IntensityCurve1 Attribute Editor. 23. Copy the Intensity Curve values from your floating window for ceilingLight to your ceilingLight1 IntensityCurve1 Attribute Editor. Repeat steps 22 and 23 for ceilingLight2. 24. Let's add one other effect to the light before we render the scene. In the Light Effects section of ceilingLight, click the map button next to Light Fog to open the lightFog Attribute Editor.
Figure 10.9: The ceiling light with adjusted intensity curves
Figure 10.10: Hypershade connections for ceilingLightShape
25. Adjust the Color attribute to match the color of ceilingLight, and decrease the Density value significantly (you'll see why once you start rendering). 26. IPR render from the shotCam perspective, and adjust the intensities and fog density until you get something close to Figure 10.13 as your result. If you don't like the light rings created by the ceilingLightRamp, access the ramp attribute through any of the ceiling lights. By adjusting the one texture, you control the rings of all three lights. 27. Group the three lights by pressing Ctrl+G, and rename the group to ceilingLights. 28. Save your scene as exercise2. You now have a starting point for your set lighting. You'll get to layer on other lights from other sources as we move through this chapter. It's easier to do lighting in stages, placing lights that have a definite direction or come from a motivated source first. This makes it easier to judge whether you're on the right track. Adding fill and bounce lights is a refining process that comes later.
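For reference, the ramp-sharing connections from steps 16 through 20 boil down to two connectAttr calls. This sketch assumes the duplicated light shapes picked up the names used in the steps above:

    // Feed the one ramp into the color of both duplicated ceiling lights.
    connectAttr -force "ceilingLightRamp.outColor" "ceilingLight1Shape.color";
    connectAttr -force "ceilingLightRamp.outColor" "ceilingLight2Shape.color";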
Figure 10.11: Both ceiling lights share the same ramp texture.
Figure 10.12: The IntensityCurve Attribute Editor for ceilingLight
Figure 10.13: The final IPR render of ceiling lights
Shadows

To get realistic results from your light setups, you'll need to master the use of shadows. You can have the best textures, the right colors, and the perfect intensities and decay rates for your lights, but if proper consideration is not given to shadowing the scene, it will lack realism. Knowing when to and when not to use shadows, and what kinds of shadows work best for the job, will save you precious rendering time. In Maya, you can render using either depth map or raytrace shadows. Although true transparency and refraction are possible only through raytracing, depth map shadows are still the fastest and most economical way to light your scenes.
Depth Map Shadows

A depth map shadow is a non-anti-aliased z-depth projection from a light's point of view, where a pixel holds a value that defines the distance from a light source to where light occlusion begins. Brighter pixels represent points closer to the light source, and darker pixels represent points farther from the light source. Acting similarly to a 3D mask, depth maps do not take into account transparency, although you can still attempt to fake it by using a solid color or by mapping a texture in the Shadow Color attribute. You can also create dissipating shadows, such as those coming from area lights, which can greatly enhance the realism of your lighting. Many CG film production companies still use depth map shadows almost exclusively to obtain high-end images. Considering that 2K images can take hours to render a single frame even in scanline, raytracing is often many times slower and is usually considered only as a last resort when acceptable results can't be obtained otherwise.
You can toggle depth map shadows in the Attribute Editor of a light's shape node under Depth Map Shadow Attributes. Dmap Resolution represents the width and height of your depth map in pixels, giving you a square depth map. The larger the resolution, the more refined the depth map, but the longer it will take to generate this map. The combination of maximizing the usable area of a depth map and keeping the resolution as low as possible results in faster rendering times with high-quality results. As a starting point, use a resolution at which objects in your depth map will appear roughly the same size as the objects in your final rendered image, as illustrated in Figure 10.14. The Use Mid Dist Dmap toggle gives you the choice of working with either min or mid distance depth maps. Understanding how both types of depth map shadows work will help you get better results in your lighting.

Figure 10.14: When you want crisp shadows, an object should occupy an equal or greater area in the depth map than that of the final rendered image.
Thanks to Florian Fernandez (www.flo3d.com) for contributing the model and setup of the puppet character used in this chapter.
In a min distance depth map, a pixel stores information that defines the distance from a light source to the nearest point of a shadow-casting surface. This method of calculating a shadow presents a problem since occlusion begins on the very surface responsible for creating the shadow. The solution is to introduce some kind of offset in order to prevent the surface itself from being occluded. This offset is called Dmap Bias. Inputting a positive value for Dmap Bias offsets a shadow farther from the light source and vice versa. You'll likely never need to introduce a negative value into Dmap Bias since doing so defeats the purpose of having it in the first place. Entering a value that is too large can also result in objects appearing to float above a surface. The right setting might require some tweaking. Min distance depth maps work best when lighting scenes in which shadows cover a limited area or for geometry that contains concave organic shapes, such as in a human face. On occasion, you may encounter dark bands in your renderings that run across surfaces, as in Figure 10.15. Although increasing your Dmap Bias might alleviate the problem, the best solution might be to switch to mid distance depth maps. A mid distance depth map shadow stores depth information differently from a min distance depth map. Pixels in mid distance depth maps contain distance values based on the distance from the light source to the mid point between two shadow-casting surfaces. This method requires the presence of at least two surfaces that cast a shadow. When using mid distance depth maps, you'll find that introducing a Dmap Bias is not required for most situations since occlusion begins halfway between two surfaces. If the object happens to be very thin, however, such as the blinds in the set, you might need to introduce a bias value to avoid self-shadowing. Mid distance depth maps work well when used to light mechanical constructions or to create shadows of objects scattered over a vast area such as a landscape. Notice in Figure 10.16 that the mid distance depth map looks more like an X ray while a min distance depth map resembles a topographic map. The Use Dmap Auto Focus toggle gives you the choice of setting Dmap Width Focus yourself or asking Maya to take care of it for you. Dmap Width Focus is the angle in degrees of your depth map for all lights except directional light. Directional light works with a camera's orthographic view and so requires the Orthographic Width value. You can obtain this value by looking through a selected directional light, framing your objects, taking note of the Orthographic Width attribute from that camera's shape node, and then entering it in the Dmap Width Focus of your directional light.

Figure 10.15: Banding from a min distance depth map
Figure 10.16: Min and mid distance depth maps
With min distance depth maps, Dmap Filter Size also influences your Dmap Bias setting. As you increase the Dmap Filter Size, you increase the softness of the shadow in all three dimensions. This added softness spills over to the surface that is casting the shadow and begins occluding it once again. If you are using the default Dmap Bias together with a high Dmap Filter Size applied to the shadow, pixel jitter can occur on the surface. Some tweaking is needed to find the right Dmap Bias value for a particular light.
If you test render a scene at a lower resolution, make it a point to test a few frames at final resolution just to rule out pixel jitter. Increasing resolution for final render may require increasing your depth map resolution as well. Remember that this will require an increase in Dmap Filter Size to get the same results with a larger depth map.
Adding Directional Light with Shadow to the Scene: Creating Moonlight

In this section, we'll take the next steps in creating a lighting setup for your scene. We'll have window light from the moon spilling through the blinds. Follow these steps: 1. If you haven't done so already, load your exercise2 scene file. 2. Create a directional light by choosing Create → Lights → Directional Light from the main menu. In the Outliner, rename the directional light moonlight. 3. In one of your working views, choose Panels → Look Through Selected to get a POV from moonlight. 4. Position your view so that you can see the floor and part of the couch through the window, and ensure that the set is visible and tightly framed in your view, as shown in Figure 10.17. 5. In your current camera view, choose View → Camera Attribute Editor, and go to the Orthographic Views section. Take note of the Orthographic Width setting. 6. Click moonlight in the Outliner again to open its Attribute Editor, and go to the Depth Map Shadow Attributes. Switch on Use Depth Map Shadows if it's not already set, and then deselect Use Dmap Auto Focus. Enter the Orthographic Width value you obtained in step 5 in the Dmap Width Focus. You should end up with a value between 50 and 60. 7. Select Use Light Position to tell Maya to render the depth map from the moonlight's position. 8. Set the Dmap Resolution to 1024, set Dmap Filter Size to 2, and set Dmap Bias to 0. The higher resolution for the shadow map reduces the chances of pixel jitter, a Dmap Filter Size of 2 gives you a shadow that is not razor sharp, and a mid distance Dmap doesn't need shadow bias in this case. 9. Select the color swatch next to the Color attribute, and adjust the color until it is a pale blue, as shown in Figure 10.18. 10. Set the intensity to about 0.1, and exit the Attribute Editor. 11. Go back to your working view, and select shotCam from the Panel Perspective menu. 12. IPR Render your scene, and adjust intensity or color until you get results similar to those in Figure 10.19. 13. Group the moonlight as you did the ceiling lights earlier in this chapter, and call the group moon. 14. Save your scene file as exercise3. Our scene is not much to look at yet, but you'll quickly notice a difference as we start adding some much needed motivated light from the floor lamp.
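If you'd rather apply the shadow settings from steps 6 through 10 in one go, the following MEL sketch does the same thing. It assumes the renamed light's shape node is called moonlightShape, uses 55 as a stand-in for the Orthographic Width you noted in step 5, and the attribute long names can be confirmed with listAttr on the shape if your version differs:

    setAttr "moonlightShape.useDepthMapShadows" 1;
    setAttr "moonlightShape.useDmapAutoFocus"   0;
    setAttr "moonlightShape.dmapWidthFocus"     55;    // the Orthographic Width you noted (50-60 here)
    setAttr "moonlightShape.useLightPosition"   1;     // render the depth map from the light's position
    setAttr "moonlightShape.dmapResolution"     1024;
    setAttr "moonlightShape.dmapFilterSize"     2;
    setAttr "moonlightShape.dmapBias"           0;
    setAttr "moonlightShape.intensity"          0.1;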
Figure 10.17: Looking at the set through moonlight
Figure 10.18: Moonlight color
Figure 10.19: Moonlight through the blinds
Dissipating Shadows

The phenomenon of light spilling into an occluded area, causing shadows to grow softer as they fall away from an object, is referred to as shadow dissipation. The larger the emitting light source, the faster the shadow dissipates. In Maya, you can create these shadows using both scanline and raytrace rendering methods. You will find that depth maps are more work to set up; however, raytraced shadows take much longer to render. In raytrace rendering, dissipating shadows are supported by all light types, including ambient lights. Simply adjust the Light Radius value under Raytrace Shadow Attributes to control the size of your emitting source. Just to give you an idea of how much time you can save by rendering in scanline, the raytrace image on the left in Figure 10.20 took more than 8 times longer to render than the image on the right using a depth map. If rendering time is not an issue and your project involves providing several still images, you'll probably get better results rendering in raytrace. Dissipating shadows will occur naturally, depending on the size you specify for your source, the proximity of one object to another, and so on. Dissipating shadows in scanline, on the other hand, can be effective in some cases but fall far short in others. You can decide for yourself when to use them in your scenes once you get a firm grasp of how they work. Creating dissipating shadows in scanline is similar to increasing the Dmap Filter Size of a shadow behind an object. In Maya, you create an anim curve, with the Dmap Filter Size paired against distance from the light source. We'll look at how to do this later in this chapter.

Raytrace Shadows

You can achieve more realism from your renders by using raytrace shadows; the tradeoff, though, is longer render times. In addition, if you render one light using raytrace shadows, you can't combine depth map shadows on the same render pass. Raytraced shadows can be composited together with a scanline render using image-processing software. Figure 10.21 shows how a raytrace shadow pass composited with a scanline rendered image can result in more realistic shadows.
Figure 10.20: Dissipating shadows from raytrace and scanline renderings (raytrace rendered image on the left, scanline rendered image on the right)

Figure 10.21: A raytrace shadow pass (on the left), composited with a scanline rendered image (in the center), can give you high-quality shadows as a final result, as in the image on the right.
To obtain a shadow pass, assign a Use Background material with no reflectivity, and set the Shadow Mask to 1 for the objects in your scene. The resulting image contains the shadow information in the alpha channel, as shown in the image on the left in Figure 10.21.
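A shadow-pass material of the kind just described can be assigned with a few lines of MEL. This is a sketch only; floorGeo is a hypothetical object name, and you would repeat the assignment for every object that should catch shadows:

    // A Use Background shader that catches shadows only (no reflections).
    string $ub = `shadingNode -asShader useBackground -name "shadowCatcher"`;
    setAttr ($ub + ".reflectivity") 0;   // no reflections in the pass
    setAttr ($ub + ".shadowMask")   1;   // write shadow information into the alpha channel
    select -r floorGeo;                  // hypothetical shadow-catching object
    hyperShade -assign $ub;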
Creating a Light Setup for the Floor Lamp

Simulating light coming from a large surface can be difficult in computer graphics. We'll use area lights, point lights, and spots to get as convincing a result as possible from the floor lamp. This type of lamp gives off two distinct qualities of light: diffuse light from the lampshade, and harsh light from the bulbs inside the lamp emitting at the open ends at the top and the bottom.

Lighting the Couch, Walls, and Coffee Table

To create the diffuse light from the lampshade, follow these steps: 1. Load the last version of your scene. 2. Create a spotlight, and supply it with an intensity curve from the Light Effects section of the Attribute Editor. 3. Now that the source has an intensity curve, change the type from Spot Light to Area Light just below Spot Light Attributes.
Figure 10.22: Top and front views of lampArea placement

4. In your Outliner, rename the area light to lampArea. 5. In your working view, select the lampshade. Click the magnet (Make Live) icon on the status bar at the top of your Maya window, and then select and MM drag the lampArea to position it on the lampshade surface. Click the Make Live button again to deactivate the lampshade. 6. From the top view, position the lampArea light slightly left of the lampshade center, and orient it so that it points between the couch and the wall. Scale it on Y until it nearly matches the height of the lampshade. The final position should resemble Figure 10.22. 7. In the Attribute Editor, change the color to a desaturated yellow, ensure the Decay Rate is set to No Decay, and switch on Use Depth Map Shadows. 8. In the Outliner, select the ceiling and floor from the set|walls group node, the lamp group node (inside set), and the lampArea light. From the Rendering menu set, choose Lighting/Shading → Break Light Links to break the light links between your lampArea and the selected objects or groups. 9. To create dissipating shadows for the lampArea, select lampAreaShape, open Hypershade, and display the upstream connections. You should have something like Figure 10.23.
Figure 10.23: Hypershade connections for lampArea
Figure 10.24: Making a connection in the Connection Editor
10. From the main menu, choose Modify → Transformation Tools → Proportional Modification Tool. In the Tool Settings, select Curve as the Modification Falloff, and select Create New from the drop-down menu to the right of the Anim Curve box. 11. In the Outliner, deselect DAG Objects Only from the Display menu, and rename your propModAnimCurve to lampAreaDmapFilterSize. MM drag and drop this node into the Hypershade. We'll need to connect the lightInfo node's sample distance to the lampAreaDmapFilterSize node's input so that at render time, Maya will know where the light is positioned in world space and apply the filter size you designate at the sample distance from the light's origin. 12. MM drag and drop the lightInfo node over the lampAreaDmapFilterSize node. 13. In the Connection Editor, select sampleDistance from the lightInfo node Outputs column, and select input from the lampAreaDmapFilterSize Inputs column to make the connection, as shown in Figure 10.24. 14. MM drag and drop the lampAreaDmapFilterSize node over the lampAreaShape node, and select Other from the pop-up menu to gain access to the Connection Editor once again. 15. Connect the output from lampAreaDmapFilterSize to the DmapFilterSize attribute of your lampAreaShape. Figure 10.25 shows what your final connections should look like in the Hypershade. (The same two connections are also written out as a MEL sketch at the end of this section.) 16. IPR render the scene, and adjust the Intensity Curve and the Dmap Filter Size of your light until there is no visible clipping on the wall or couch closest to the lamp. Even though we're using mid distance depth maps, you'll need to increase the Dmap Bias setting to compensate for the increased blur at the far end of the room. Also, make sure that some light reaches the back wall. You should finish with something like Figure 10.26. 17. Duplicate lampArea, and in the Attribute Editor, switch the type to Spot Light and assign a new intensity curve to it.
Figure 10.25: Final connections for lampArea
Figure 10.26: The IPR render of lampArea
18. Change the type back to Area Light once again. You may have noticed that Maya restored the name of your lampArea1Shape to areaLightShape1 in the Attribute Editor upon switching types. Rename it back to lampArea1Shape to avoid possible confusion later. 19. Turn off Illuminates by Default, and Ctrl+select the wallNorth group node (inside set) from the Outliner. Under the Rendering menu set, choose Lighting/Shading → Make Light Links so that lampArea1 affects only the wall. 20. Rotate and position the lampArea1 until it resembles Figure 10.27. 21. IPR render shotCam and adjust the intensity curves to get an even lighting behind the lamp. When you've gotten results that are similar to Figure 10.28, continue with the steps in the next section.
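The two connections made through the Connection Editor in steps 13 and 15 amount to the following MEL. The lightInfo node name is an assumption (Maya numbers it when the intensity curve is created), so substitute whatever name appears in your Outliner:

    // Drive the anim curve with the light's sample distance, then drive the filter size with the curve.
    connectAttr -force "lightInfo1.sampleDistance"     "lampAreaDmapFilterSize.input";
    connectAttr -force "lampAreaDmapFilterSize.output" "lampAreaShape.dmapFilterSize";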
Creating Light Emission from the Bottom of the Lamp

To add harsh light that spills out of the bottom of the floor lamp, follow these steps: 1. For lighting beneath the lamp, a spotlight will do the trick. Create a spotlight and call it lampBot. 2. Point constrain the spotlight to the lamp group node to center it on the lamp. Delete the constraint, and, selecting only the Y manipulator, move it down until it is near the bottom of the lampshade. 3. Rotate lampBot -90 degrees on X so that it is pointing straight down. 4. In the Attribute Editor, change the color so that it is a pale yellow like the lampArea, deselect Illuminates by Default, and set the Decay Rate to Linear. 5. Light link lampBot to the floor and lampBase models only. Turning off Illuminates by Default turns the light off in the scene, and then light linking it to specific objects in the scene results in only those objects being lit. 6. Set your lampBot Cone Angle to 90, and set the Penumbra to -15. 7. Make sure lampBot is selected, and then in one of your working views, choose Panels → Look Through Selected. Move in or out until your cone angle roughly fills the lamp opening.
Figure 10.27: The position and orientation of lampArea1
Figure 10.28: Two area lights simulate the soft light from the floor lamp.

8. Set your lampBot Dmap Resolution to 128, and deselect Use Dmap Auto Focus. Looking through your light, determine the tightest Dmap Focus for your shadow by adjusting your cone angle until it encompasses only the lampBase geometry. (See Figure 10.29.) Enter the cone angle value you've found in your Dmap Focus, and remember to reset the cone angle back to 90. 9. IPR render your scene, and adjust the intensity until you get a setting similar to Figure 10.30. If you find it's a bit low on light, that's all right. Later, we'll layer another light on the lampBase and floor, which should bring up the intensity.
Figure 10.29: Adjusting to find the proper Dmap Focus for lampBot
Figure 10.30: The light level of lampBot

Creating Light Emission from the Top of the Lamp

Now we need to simulate light coming from the top of the lamp. Follow these steps: 1. You'll need to taper in the intensity again for this light, so create a spotlight and create an intensity curve. Call it lampTop, and position it near the top of the lampshade. 2. Orient your light upward, and set the cone angle to about 90. Look through your lampTop, and position it so that the cone angle slightly encompasses the top of the lampshade. 3. Give lampTop the same color as lampBot, be sure the Decay Rate is set to Quadratic, and be sure that Use Depth Map Shadows is turned on.
4. You'll need to dissipate the shadow somewhat as it stretches up to the ceiling. Using steps 9 through 15 from the "Lighting the Couch, Walls, and Coffee Table" section as your guide, create the setup for dissipating shadows. 5. IPR render your scene and activate a region above the lamp to adjust. 6. In the Graph Editor, adjust both the intensity curve and the Dmap Filter Size curve until you have something that looks similar to Figure 10.31.

Creating Soft Light for the Walls and Floor from the Lampshade

Earlier we simulated light from the lampshade hitting the walls and couch. Now we need to simulate light from the lampshade on the floor and ceiling. This process will brighten the entire scene and make the light emanating from the lamp more convincing. Follow these steps: 1. Create a spotlight with an intensity curve. 2. Change the type to Point Light, and position it at the center of the lamp, making sure it is placed at the same height on Y as the two area lights. 3. In the Outliner, rename the light to lampshadeFill. 4. Change the color of the light to a pale yellow like that of the two area lights, and ensure that you have Use Depth Map Shadows turned on. 5. Break the light link between lampshadeFill and the lampshade, lampStem, and both LampSupRod groups to ensure that no lamp objects generate unnatural shadows or occlude the very light we're trying to set up. 6. IPR render out the scene and adjust the intensity curve until you get something similar to Figure 10.32. 7. Add a DmapFilterSize curve to your light to create dissipating shadows. 8. In the Graph Editor, set up your curve to start from a DmapFilterSize of 1 at a sampleDistance of 5, and increase both from there. If you want precise measurement between points in your scene, use the Distance Tool under Create → Measure Tools from the main menu and then simply follow the Help Line tips.
Figure 10.31: The final IPR look of lampTop
Figure 10.32: The dim light of lampshadeFill should add a bit more detail to the room while adding much needed light to the floor and ceiling.
9. We'll need to simulate some kind of specular reflection on the floor from the lampshade also, so create an area light, name it floorSpecular, and point constrain it to the lampshade. 10. Rotate the light on Y to orient it toward the shotCam camera, and scale it until it is slightly shorter and narrower than the actual lampshade. 11. Render out your scene once again. Your final results should resemble Figure 10.33. 12. Group all the lights for the lamp, and call it lampLights. 13. Save your scene as exercise4. If your results resemble Figure 10.33, congratulations! This is the most challenging part of this chapter. If you're still unclear about some of the steps from the preceding section, that's all right. You might have to try certain parts again before they sink in completely.
Figure 10.33: The final look of lighting from the floor lamp
Adding Bounce and Fill to Your Scene

In this section, I'll guide you through adding the crucial finishing touches to your lighting scene in order to make it look more natural. Studying the render from the previous section, it's clear that some bounce light would definitely emanate from the floor to light the pillar, couch, ceiling, and walls to some extent. To make things manageable, we'll first add bounce to the walls. We'll then polish off the couch and add a fill light from an off-screen television.

Adding Bounce Light to the Walls, Ceiling, and Floor
In this section, we'll simulate the radiation of light off objects in the scene to lighten the walls, ceiling, and floor wherever needed. Follow these steps: 1. Load the scene you previously saved as exercise4. 2. We'll start by creating subtle fill light for the pillar, coffee table, and wall behind the lamp. Create a spotlight with an intensity curve, change the light type to Point Light, and in the Outliner call it bounceFromFloorLamp. 3. Place bounceFromFloorLamp in front of the lamp at floor level, and break all light links between it and the couch, lamp, and floor to avoid distracting shadows in the scene. 4. Turn off Emit Specular, and change the color to beige, similar to the color of the floor in Figure 10.34.
Figure 10.34: The light color of bounceFromFloorLamp
A good way to select a bounce color is to render out your latest setup (or load an image in the Render View) and pick the color from the area your light should emanate from, using the Eyedropper tool in the Color Chooser.
5. Ensure that you're using a quadratic falloff as your decay rate. Turn on Use Depth Map Shadows, and deselect Use X+ Dmap, Use Y- Dmap, and Use Z+ Dmap since we only need a shadow to cast from the beam against the back wall. We'll tweak this light's intensity curve once we place a few more lights in the scene. 6. To diminish the contrast on the far wall created by the shadow of the couch, add a point light near the far arm of the couch. 7. Name this light bounceFromCouchEastWall. 8. Use a desaturated beige, paler than that used to define the bounceFromFloorLamp light color, set Decay Rate to Linear, and turn off Emit Specular once again. 9. Light link it exclusively to the wallEast and blinds group nodes inside set. 10. Turn off Use Depth Map Shadows for this light and keep the intensity very low (an intensity of 1 should suffice because this light serves only as a subtle fill). 11. Now we need to add a bounce light for the ceiling. Create an area light and call it bounceWallNorthOnCeiling. 12. Assign it a desaturated yellow color, turn off Emit Specular, and light link it to only the ceiling. 13. Position bounceWallNorthOnCeiling halfway between the couch and the ceiling against the north wall. Center it below the middle ceiling light, and scale on X so that its ends stretch past the first and last ceiling lights. Remember to rotate the light on Y so that it is facing the room. 14. We'll create a light to add a hint of detail to the far side of the pillar. Create an area light, name it bouncePillarFarSide, and scale it on all axes by 4. 15. To position bouncePillarFarSide, choose Look Through Selected in your working view Panels menu, and then orient the view to see the dark side of the pillar and a majority of the room. 16. Light link bouncePillarFarSide to only the wallNorth group and ceiling, and disable Emit Specular in the Attribute Editor. You won't need shadows for this light. 17. The last light you'll need to add for this part will be used to fill in pure black areas on the floor. Create a spotlight called fillFloor with a linear decay rate, and position it near the wallW geometry. Turn off Emit Specular, and light link the spotlight exclusively to the floor. 18. While looking through your light, you should have the floor below the pillar centered in your light cone. Set the Penumbra Angle to the negative value of your cone angle to make the light fade out gradually toward the edge. Your resulting light positions should resemble those in Figure 10.35. 19. IPR render the scene with your lights, and adjust the intensities and intensity curves until your results look like Figure 10.36. If you find that any of your lights have become too intense, "blowing out" the scene, be sure to go back and adjust them accordingly.
Figure 10.35: The final positioning of the bounceOnWall setup

20. Group your lights, and call the group bounceOnWalls. 21. Save your work as exercise5.

Figure 10.36: The final look of bounce lighting for the walls, floor, ceiling, and coffee table

Adding Bounce Light to the Couch

Now let's get some bounce light on the couch. Follow these steps: 1. Create a spotlight with an intensity curve, name it bounceCouchWest, and place it at the same location as your bounceFromFloorLamp light. You might want to copy the intensity curve values you got for bounceFromFloorLamp to bounceCouchWest because they should be roughly the same. 2. We'll take this opportunity to add some bounce to the base and stem of the floor lamp as well. Disable Illuminates by Default in the Attribute Editor, and then light link bounceCouchWest to the couch, the lampBase, and lampStem geometry. 3. Look through the selected light and orient it toward the couch, ensuring that the cone angle is sufficiently large to encompass the entire couch. Take note of the cone angle and enter it in the Dmap Focus for this light. 4. Enlarge the cone angle of the light until you can see the base and stem of the floor lamp. 5. Choose a light color similar to that of bounceFromFloorLamp, and set Decay Rate to Quadratic. Your Dmap Resolution for this light doesn't need to be larger than 256, and the Dmap Filter Size should be about 4. 6. Again, we'll add another couple of lights to the scene before rendering. Create another spotlight, and call it bounceCouchCenter. 7. Light link bounceCouchCenter to the couch, place it below the floor in front of the couch, and ensure that the entire couch is visible when looking through bounceCouchCenter. 8. Select No Decay for this light, use a low resolution of 128 for the shadow map, and set the Dmap Filter Size to about 4.
Figure 10.37: The final placement of bounce lights on the couch within the scene

9. Even with these two lights, you may find that, upon rendering, the cushions on the couch are still quite dark in the lower regions. Add another spotlight, and light link it to only the throw pillows, the side cushions, and the back cushions. 10. Set Decay Rate to No Decay, and set Dropoff to 1 to allow the intensity near the edge of the light to diminish slightly. Position the edge of the light in front of the couch and slightly below the bottom cushions. 11. Use depth map shadows with a resolution of 256 for this light and a Dmap Filter Size of 4. Your final setup for the bounce lights on the couch should resemble that in Figure 10.37. 12. IPR render your scene, and adjust intensities until you get something similar to Figure 10.38. 13. Group your couch lights under a group node named bounceOnCouch, and save your scene.

Figure 10.38: The final results of bounce lighting on the couch

Adding Light from an Off-screen Television

This is the last addition to the set lighting and helps to break up the otherwise yellow hue of most of the lights in the scene. Follow these steps: 1. Add an area light to your scene and call it tvFill. 2. Scale tvFill by roughly 5 on X and 3 on Y, and place it off-screen slightly above and facing the coffee table.
Figure 10.39: All lights including the tvFill complete the set lighting.
3. Give the tvFill a pale blue color, select Use Depth Map Shadows, and set Dmap Filter Size to 8. 4. IPR render your scene. Adjust the intensity until you get something similar to Figure 10.39. If you want, you can use a noise function to vary the intensity of the tvFill light to simulate the varying intensity of a television set. 5. Group this light under a group node named tvLight. 6. Select all the light groups you've created thus far and group them. Call this group lights, and save your scene file as exercise5. If you followed the steps closely, you now have a complete light rig. If you've never paid much heed to using descriptive naming standards or structure in your work, you'll find yourself lost amid the many lights that constitute a setup similar to this. By grouping lights in an organized fashion, and not throwing them throughout your scene, you can easily navigate and understand what each light's role is in the lighting of your set. This is important when working with others who may need to work with your light setup or if you have a problem that needs troubleshooting.
Light Rigs

Because the process of creating a good lighting setup can be time consuming, a structured workflow is crucial. Essentially, a light rig is a set of lights parented under either a group or a locator top node and positioned for a specific element in a scene or for a particular layer. This node can be constrained to a target element so that all lights follow its movement, as you'll see in the next section. This type of setup allows for unlimited movement of the subject in the scene without concern for shadow map imprecision. Using one large shadow map that encompasses a large area of a scene limits how close you can get to the subject before experiencing pixel jitter or other nasty rendering artifacts. Also, a constrained light rig can allow for lower shadow map resolutions for subjects that occupy a smaller area on the screen.
Once you tweak and position a lighting rig for a particular subject in a scene, you can duplicate it and constrain it to other elements within the same shot or to elements in other shots that have similar ambient lighting conditions.
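In practice, reusing a rig is often just a duplicate-and-constrain operation. A hedged MEL sketch, in which characterLightRig and secondCharacter are hypothetical node names:

    // Copy an existing, tweaked light rig and attach it to another subject.
    string $newRig[] = `duplicate -rr characterLightRig`;   // -rr returns only the new top node
    pointConstraint -weight 1 secondCharacter $newRig[0];   // the whole rig now follows secondCharacter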
Lighting the Doll

In this section, we'll create a simple light rig for a puppet character walking through the scene. Follow these steps: 1. Load the exercise5 scene file. 2. Import the puppet.mb scene file found in the chapter10 folder on the CD-ROM. 3. From the Lighting/Shading menu, choose Light Linking → Object-Centric to open the Relationship Editor. Under Object Centric Light Linking in the Relationship Editor, select an object or a surface. From the Light Sources column, choose which sources will illuminate the object by simply selecting or deselecting your lights. 4. Select the Puppet group node from the Illuminated Objects column, and then deselect all lights from the Light Sources column except for the ceiling lights, lampshadeFill, moonlight, bounceWallNorthOnCeiling, and the tvFill to make light links between the puppet and those lights. Only those lights now illuminate the puppet. 5. If you go to frame 60, you may notice upon rendering that there's a need to add some light to the puppet in order to integrate her better into the scene. Create a point light and call it lampshadeFillPuppet. 6. Set Decay Rate to Linear, select an off-white color similar to the lamp lights, and point constrain this light to lampshadeFill under the lampLights group you created earlier. 7. Light link this light exclusively to the puppet group node. Light linking lampshadeFillPuppet to only the puppet geometry poses a problem if the lampshadeFill light isn't already linked to the puppet. The problem arises because objects such as the couch don't project shadows on the puppet because lampshadeFillPuppet is not light linked to the couch. When she walks behind the couch, she is fully lit below her knees where no light should be hitting, making it obvious that she is lit separately from the rest of the scene.
We'll use the shadows generated by lampshadeFill and plug them into lampshadeFillPuppet, because they are both identical light types and are positioned in exactly the same location. Moreover, lampshadeFill generates shadows containing depth information for the puppet and the rest of the set, making it possible for shadows from the couch to fall on the puppet as she walks past it. 8. Ensure that lampshadeFill has Reuse Existing Dmap(s) set in the Disk Based Maps drop-down menu in the Attribute Editor and that both lampshadeFill and lampshadeFillPuppet have Dmap Resolution set to 512. 9. Deselect Dmap Scene Name and Dmap Light Name, and toggle on Dmap Frame Ext for both the lampshadeFill and lampshadeFillPuppet. You might want to consider deselecting Dmap Frame Extension when performing IPR renders. The IPR render seems to have issues with depth maps using frame extensions and does not employ them when fine-tuning a region.
10. For lampshadeFill, enter depthmapLampshadeFill in the Dmap Name box. 11. From the main menu, choose Window → General Editors → Connection Editor to open the Connection Editor. 12. Load the lampshadeFillShape attributes in the Outputs column and the lampshadeFillPuppetShape attributes in the Inputs column, and connect both Dmap Name attributes. lampshadeFillPuppet now shares the depth maps created by lampshadeFill. Be sure to display shapes (choose Display → Shapes) in the Outliner, or you may not be able to load the shape attributes for lampshadeFillPuppet or lampshadeFill. (This connection is also shown as a MEL sketch at the end of this section.) 13. Let's add a bounce light that will travel with our puppet as she walks through the set. Create a spotlight called bouncePuppet with No Decay and Emit Specular deselected. Give the spotlight a color similar to the brightest part of the floor. 14. Select Use Depth Map Shadows with a Dmap Filter Size of about 6. 15. Light link the spotlight exclusively to the puppet, and position it below and behind the puppet as shown in Figure 10.40. 16. Since our character moves through the set, the light position will be correct for only a couple of frames out of the whole shot. To correct this, create an empty group and point constrain it to the waist joint node under the puppetSkel group inside puppet. This group will follow the puppet for the duration of the shot. 17. MM drag bouncePuppet into this new group in the Hypergraph or Outliner, and presto, the light now follows the character for the length of the shot. You might want to throw out any depth maps that were generated earlier in this chapter so that shadows are regenerated now that the puppet character has been added. 18. IPR render the scene and adjust the light intensities until your results resemble Figure 10.41.
Figure 10.40: Bounce light from the floor on the puppet
19. Group the two lights for the puppet, and name your group puppetLights.
20. Save your scene as exercise6.
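If you prefer to script this rig rather than build it by hand, the following MEL sketch covers the core of steps 5 through 7, the depth-map sharing in steps 10 through 12, and the follow group in steps 16 and 17. It assumes the node names used in this exercise (Puppet, lampshadeFill, bouncePuppet, puppetSkel, and the waist joint); the group name puppetFollow is our own invention, and making a light truly exclusive to the puppet is still easiest through the Relationship Editor, as in step 4. Treat it as a starting point to verify against your Maya version, not a drop-in script.

// Steps 5-7: a dedicated fill light for the puppet
string $shape = `pointLight`;                        // returns the new light's shape node
string $xform[] = `listRelatives -parent $shape`;
rename $xform[0] "lampshadeFillPuppet";              // the shape becomes lampshadeFillPuppetShape
setAttr "lampshadeFillPuppetShape.decayRate" 1;      // 1 = Linear decay
setAttr "lampshadeFillPuppetShape.color" -type double3 1.0 0.95 0.85;  // off-white, like the lamp lights
pointConstraint lampshadeFill lampshadeFillPuppet;   // ride along with the existing lamp fill
lightlink -make -light lampshadeFillPuppet -object Puppet;   // link it to the puppet group

// Steps 10-12: share lampshadeFill's depth maps
setAttr "lampshadeFillShape.dmapName" -type "string" "depthmapLampshadeFill";
connectAttr lampshadeFillShape.dmapName lampshadeFillPuppetShape.dmapName;

// Steps 16-17: make the bounce light follow the puppet
string $follow = `group -empty -name "puppetFollow"`;   // placeholder name of our choosing
pointConstraint waist $follow;                           // assumes the joint under puppet|puppetSkel is named waist
parent bouncePuppet $follow;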
If you decide to render out the shot for the duration of the camera interval, you'll have a photo-realistic render of the puppet walking through the set.
Figure 10.41: The final lighting of the puppet character and set

Figure 10.42: The final lighting of the set with simulated reflections on the floor and in the window

Finishing Touches
You may not have reflections like the image in Figure 10.42, but with a little more work, you can get similar results. Getting reflections in scanline rendering sometimes requires reflection maps. However, when you have a large area that needs a reflection, such as a floor, inverting your set on Y, repositioning lights, and adding a little fog can get you better results than raytracing your scene. If you don't have compositing software, a well-selected transparency setting will give you control over how much of a camera image plane you allow to show through a surface. For the reflection in the floor, I copied my final scene file and inverted the set node on Y. Some lights needed rotation on X, and others, such as the area lights, needed to be repositioned completely. I used fog on height to obscure the reflected image on Y to avoid a perfect reflection typically associated with raytracing. Once the image was rendered, I mapped the image to an image plane in the final scene and added some transparency to the floor to allow the image to bleed through. I used the same procedure for the reflection in the glass at the far end of the room. You can get the same results in many ways. This method is simply one that works well in most cases. What's important is to know when to stop and when to go that extra yard. In this particular case, it made sense to add reflections in the floor and in the window. You can make other tweaks and adjustments to improve the look of this and any lighting scene, but as we all know, time is of the essence in any production.
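If you want to experiment with the idea before committing to a separate scene file, a rough MEL sketch of the two key moves might look like this. The node and shader names are placeholders, since your own set group and floor material will be named differently, and the transparency value is just a starting point.

// Flip the set below the floor plane and let the mirrored render bleed through the floor.
// "set" and "floorShader" are placeholder names for your own nodes.
setAttr "set.scaleY" -1;
setAttr "floorShader.transparency" -type double3 0.2 0.2 0.2;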
Summary The focus of this chapter was to help you develop a better understanding of lighting basics while improving the quality of your photo-realistic lighting work through practical examples. I discussed how to simulate light from motivated and bounce sources using Maya's variety of light types, emphasized the importance shadows play in creating natural lighting environments, and covered different methods of light linking. The techniques employed in this chapter provide some alternate approaches to using Maya's tools and should help improve the quality of your work.
Distributed Rendering Timothy A. Davis Jacob Richards
works in parallel. At any given moment, billions of photons move through the air to strike a surface, hundreds of machines work together to build a car, and tens of people on a team work simultaneously to solve a problem. Computers work in similar ways. Although we usually think of a computer starting a task, performing the processing necessary to complete the task, and then terminating the task, computers can complete tasks in nonlinear ways. In this chapter, we'll show you how to use multiple machines for rendering with Maya. In general, using multiple processors for rendering is termed distributed rendering. You can perform distributed rendering on a specialized multiprocessor machine, such as the SGI Onyx2, on a collection of networked machines, or with both. Using a group of networked machines is called network rendering, and the cluster of machines used for this task is called a render farm. Every major special effects or animation studio uses some sort of distributed rendering, often with a render farm of Unix-based machines (SGI or Sun workstations) interconnected by a local area network. More recently, render farms have been springing up that consist of commodity PCs running Linux or Windows. As a result of the incredible advances in PC hardware performance, these machines can run circles around those that cost millions of dollars several years ago. Although these machines have become ubiquitous, few users actually take advantage of the
incredible power at their fingertips. Consequently, billions of CPU cycles are wasted each day. Fortunately, rendering is incredibly CPU-intensive and can take advantage of those wasted cycles, often bringing the most powerful systems to their knees. Although Maya comes bundled with distributed-rendering software for Unix-based machines, it does not include any such software for Windows machines. Considering the power of such machines, a Windows-based distributed renderer could prove highly useful. For this reason, in this chapter we'll focus on developing distributed-rendering tools for PCs running Windows. We'll begin by discussing the distributed renderer that is packaged with the standard Maya release and some of its potential pitfalls. Next, we'll make some suggestions about how to get the most out of your network for Maya rendering, and then we'll turn to the details of our network renderer (called Socks) for Windows machines. During this discussion, we'll describe how to use this system and what's going on under the hood. In the end, we hope you'll have a better understanding of distributed rendering and how you can make it work for you.
The Maya Dispatcher
Coordinating multiple rendering tasks across a network of machines can be a big job. Not only do you have to decide how to break up the rendering, but you must launch the rendering jobs on each of the participating machines. If you were to perform this task yourself, you would have to either launch the renders remotely (using something such as rsh) or log in to each machine and start each render manually. You can break up a large rendering task in many ways, but two of the most common are on an image level and on a frame level. On an image level, you divide an image into multiple sections and send the pieces to different machines to render, as shown in Figure 11.1.
Figure 11.1: In image-level subdivision, a single image is divided into smaller regions that can be rendered by different machines.
Figure 11.2: In frame-level subdivision, a multiframe animation is divided into frames (or sequences of frames) that can be rendered by different machines.

On a frame level, each machine renders a single frame, or a subset of frames, in an animation sequence, as shown in Figure 11.2. Since most large rendering tasks involve animations, we'll focus on rendering at the frame level. So, let's assume you have now started a render for a single frame on each machine in the network. As frames of the animation complete, you need to collect them and possibly move them to a single machine (if the machines do not share a file system). You also need to watch the render tasks so that as soon as one finishes on a machine, you can start another render on that machine. As you can imagine, this process is tedious and time-consuming. And, hey, you have more important things to do (such as fix the mistakes you found in the frames that have rendered so far). Fortunately, Maya offers you some help on Unix systems with its bundled distributed renderer called Dispatcher. Dispatcher can run on any machine within the network to control distributed rendering from a single centralized location.
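To appreciate what Dispatcher automates, consider what a manual frame-level split looks like. Commands along these lines (the hostnames and paths are placeholders, and the exact renderer flags can vary between Maya versions) would have to be issued, and then babysat, for every machine in the pool:

rsh render01 "Render -s 1  -e 50  -proj /shared/myProject /shared/myProject/scenes/shot01.mb"
rsh render02 "Render -s 51 -e 100 -proj /shared/myProject /shared/myProject/scenes/shot01.mb"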
The Dispatcher Windows Contained within Dispatcher is the capability to create pools of computers, to queue jobs according to a user-defined priority, to cancel, suspend, and resume jobs, and to define rules for the computers in the render farm, such as hours of operation, number of CPUs to use, and
average load per computer. With these features, you can start a large render, leave it running, and come back later to see the results. Dispatcher takes care of most of the pesky details. Dispatcher also attempts to save some load time by sending each computer what it calls a packet of frames. A packet is really nothing more than a request for the computer to render a certain number of consecutive frames. Thus, the computer has the geometry, textures, and any other necessary information already loaded into RAM from the previous frame and does not have to access the disk or network to get this data to render the current frame. This makes sense when you consider, for example, that it takes about 15 minutes to load a 15MB scene file and 500MB of textures over a 10Mbit line for each frame you want to render. Even at 100Mbit speed, it takes about 2 minutes to load the scene. Therefore, you waste 1 hour of render time for every 1 second of animation produced at 30fps. Dispatcher is also nicely integrated in Maya's interface. In the Render drop-down menu in the main Maya window is an option called (Save) Distributed Render. When you select this option, Maya displays the Save/Render window (see Figure 11.3) that asks for a filename for your current scene. When you select a filename, Maya displays the Submit Job window (see Figure 11.4), in which you will find a number of useful options for rendering your animation. You can edit the distributed render command line, set the start and end frames for rendering, select a rendering pool of machines, and give your job a priority of 0-100 (with 100 being the most important and 1 being the least; 0 is used for suspending the job). Maya also opens another window titled The_Dispatcher_Interface (see Figure 11.5) in the background that displays your jobs and the machines they are running on once the render begins. The first section of this window shows each host, the job running on it, and the current frame number it is rendering. The middle section shows similar information, but based on job ordering rather than on host ordering. The bottom section lists jobs waiting to be rendered. The_Dispatcher_Interface window also contains several drop-down menus: File, Hosts, Jobs, and Pools. We'll discuss some features of the options on these menus below, but for a complete description, see the online documentation. Choose Hosts > Configure to open the Host Configuration window, as shown in Figure 11.6. In this window, you can configure the client machines on which you want to render. The Host list box contains all the machines that the Dispatcher has access to and allows you to choose the machine you want to configure. Clicking the Enabled button lets Dispatcher use this machine in the distributed render; otherwise, Dispatcher is denied access to this machine. You use the options in the middle section of the Host Configuration window to restrict use of the
Figure 11.3: The Save/Render window allows you to specify an output file.
Figure 11.4: The Submit Job window provides a means for launching a distributed render.
Figure 11.5: The_Dispatcher_Interface window displays information about render jobs.
machines by day and time. For example, if you want to allow users to send distributed renders to your machine only after 5 p.m. (that is, after you've gone home), you can set this machine to be available only after 5 p.m. Monday through Friday. Select your machine from the list, select the days you want to restrict (make sure they are highlighted), and use the sliders to specify when the machine is available.

Figure 11.6: The Host Configuration window provides information about host machines.

The next task is to specify the minimum amount of idle time that must occur on a machine before a distributed render can begin on it. (Idle time is the number of minutes that the machine has had neither keyboard nor mouse input.) Specifying idle time is useful in a public lab where you don't want to monopolize the processor if someone is currently using the machine, but where you would also like to start a render if someone is logged in but has stepped away for quite some time. The idle time can range from 0 to 30 minutes. You use the Maximum Jobs slider to control how many instances of Maya can be run at one time on the computer. Normally, this number matches the number of CPUs you have; however, this number doesn't specify how many CPUs can be devoted to each frame, so it is different from the -n option in the command line renderer. Instead, this value represents the number of distinct render processes that will be opened to work on individual frames. For example, if this option is set to 2 on a machine, that machine receives two frames to render concurrently; it does not use two processors to render one frame. Setting the Maximum Jobs number higher than the number of processors on the machine can cause serious processing delays. That is, doing so will start N renders, which could slow down your machine dramatically as context switches between the processes eat up CPU cycles, even if those processes are set to low priorities with nice.
By selecting options from the Jobs drop-down menu, you can view information about jobs, suspend and restart jobs, cancel jobs, and open the Submit Job window to start a new job. Additionally, these options let you change rendering parameters on the fly. You can use the Pools drop-down menu to create a new pool, delete a pool, and select (configure) machines to include in created pools (see Figure 11.7). Be aware that each rendering task consumes a Maya rendering license. These licenses are separate from the interactive licenses, so they will not interfere with other work you want to perform. Further, you are typically allotted 100,000 rendering licenses, minus the number of interactive licenses. So, if you have 20 interactive licenses, you will have 99,980 rendering licenses, which is far more than enough for any manageable rendering task.

Figure 11.7: To configure a pool of machines, select the computers to include in the pool.

If everything goes OK, you will see your job listed, along with a list of computers and the frames they are rendering, in the middle section of The_Dispatcher_Interface window. Unfortunately, we don't live in a perfect world, so the next section describes those cases when everything does not go OK.
Maya Dispatcher Caveats
Although Dispatcher can be useful for large rendering jobs, it has some problems. It simply is not robust enough to detect, much less correct, rendering errors. In this section, we'll describe some of these problems.
Dropped Frames Unfortunately, if one of the computers in your pool drops a frame, Dispatcher may have no idea that the frame is not complete. It just knows that Maya is not running on that computer and sends another frame for it to render. Obviously, this is not good since the machine is apparently experiencing a problem with the file or is experiencing some system error. Dropped frames can be especially annoying since fcheck and Adobe's After Effects may choke when trying to display an incomplete sequence of consecutively numbered images. A smarter Dispatcher would detect that the render did not complete successfully and send the frame to be rendered again. Whether it should send the frame to the same computer or a different (hopefully working) machine is up to the programmer. If a machine is constantly dropping frames, you might want to remove it from the render farm pool until you can figure out what's wrong with it. It could be that the computer can't access a file or doesn't have enough RAM to hold the current file. Or maybe another
process running on the machine is using 99 percent of the processor. Who knows? The point is that many situations can prevent a machine from completing a render. One nice feature of Dispatcher is that when it does detect a dropped frame, it sends you an e-mail message telling you which frame was dropped by what machine and when. In some instances, though, the machine locks during rendering, and Dispatcher does not detect the lock-up. Instead, Dispatcher appears to think that the machine is rendering a particular frame for hours upon hours, so the frame never gets reassigned to another machine. The result is a waste of a lot of machine hours in addition to the dropped frame.
Incomplete Frames Occasionally, the Maya renderer crashes in the middle of a frame render, leaving the image output file only partially complete. Currently, Dispatcher does not check for such situations; therefore, no corrective action is taken. It may well be that the machine that just experienced the render crash will be given a new task as if nothing had happened. This situation is not usually detected until you view the frames with fcheck or some other animation viewer. Although you might notice a dropped frame in a directory listing, an incomplete frame can be harder to detect unless you look at a detailed listing that shows all the file sizes. Sometimes, nothing is written to the file, so its size is 0, which is relatively easy to spot in a listing. At other times, though, the frame is closer to complete, so its file size doesn't seem out of line with the others. This is especially true with .iff files, the sizes of which are not constant across frames.
Problems with Dynamics When working with dynamics, such as particle systems, you must take special care with distributed rendering because sometimes calculations are not performed until the render stage. Ideally, you would like multiple machines to work on the calculations to reduce overall render time, but this is not a good idea. When calculating the values needed to render dynamics (such as the position, speed, and density of particles), random numbers are often used. This is not a problem in itself until it is spread across machines. That is, machines participating in the distributed rendering can generate random values differently, especially if they differ in underlying architecture. Also, since the state of a dynamic system for a frame depends on the state of the system in the previous frame, the final animation can exhibit discontinuities across rendering boundaries. For example, if one machine rendered frames 1 through 10 of a particle system animation, and another machine rendered frames 11 through 20, there may be a noticeable break, or jump, in the particles between frames 10 and 11. One way to avoid this problem with particle systems is to use particle disk caching. In particle disk caching, all the calculations are pre-computed and stored in files before rendering begins. All renderers, however, must have access to these files, either through a shared file system or through a local copy of the files. Another way to work around this problem is to bake the dynamics simulation before rendering. With this method, Maya computes and keyframes all the calculated positions before rendering begins, thereby avoiding potential continuity problems. Keep in mind, though, that once the dynamics simulation has been keyframed, you cannot go back.
In order to modify the way the dynamics simulation works, you must delete all the keyframes and re-create the dynamics system from scratch.
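If you go the baking route, it can be scripted as part of a pre-render checklist. The MEL sketch below is a minimal example; the object name and frame range are placeholders, and the bakeResults flags are worth double-checking against your Maya version. For particle systems, create a particle disk cache instead and make sure the cache directory is visible to every render machine.

// Bake a dynamics-driven object down to plain keyframes before farming out frames.
select -r tableTop;                                   // placeholder: the rigid-body object to bake
bakeResults -simulation true -t "1:300" -sampleBy 1;  // placeholder frame range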
Tools and Recommendations In this section, we'll discuss some tools and give you some general recommendations for getting the most out of your network rendering system. These tools (included on the CD-ROM accompanying this book) will address the problems we identified earlier, and the recommendations will help you make decisions on how best to set up your network renders.
The Frame Check Tool
The Frame Check tool, which you'll find on the CD, was designed to detect dropped frames and incomplete frames on Unix-based systems after rendering is complete. It was created by Karl Rasche at Clemson University during the production of an animation project (Retrofit, 2001) to combat the frustration the team was experiencing with distributed rendering. To run the program, type the following at the command line:
frame_check <file_name>
The file_name is the prefix name of the frame filenames, which must be in the format <frame>.<frame_number> (similar to fcheck). Figure 11.8 shows example output of the program. The program checks for three types of problem frames:
• Missing frames
• Empty frames
• Suspicious frames
Missing frames are detected by a skipped frame number in the list of files (for example, bounce2.iff.9, bounce2.iff.10, bounce2.iff.12). You can easily identify empty frames since they have a file size of 0. Suspicious frames include any frame that is considerably different in file size from the mean file size of the other frames in the directory. You might have to view suspicious files individually to determine that they are complete. Also notice in Figure 11.8 that some additional information is provided:
• Elapsed time
• Mean time
• Est. time remaining
Figure 11.8: The Frame Check tool provides statistics on rendered frames.
Elapsed time is the amount of time that has passed between the oldest and most recently rendered frames and is computed from the time stamps on the files. The Mean time is simply an average rendering time based on the Elapsed time divided by the number of frames. The Est. time remaining indicates how much time is left to render, based on the current frames.
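The Frame Check tool itself is Unix-only, but the missing-frame test is easy to approximate elsewhere. The MEL sketch below scans a folder for numbered frames and reports any gaps; the directory and file spec are placeholders for your own project, and it assumes the frame number is the last dot-separated token of each filename.

// Report skipped frame numbers in a directory of rendered images.
string $dir = "S:/myProject/images/";                            // placeholder path
string $files[] = `getFileList -folder $dir -filespec "bounce2.iff.*"`;
int $found[];
string $f;
for ($f in $files) {
    string $parts[];
    int $n = `tokenize $f "." $parts`;     // last token is the frame number
    int $frame = $parts[$n - 1];
    $found[$frame] = 1;
}
int $i;
for ($i = 1; $i < size($found); $i++)
    if ($found[$i] != 1)
        print ("Missing frame: " + $i + "\n");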
The Load Scan Tool
Before you start a distributed rendering job, it would be nice to know which other jobs are running on the machines in the render pool. You'll want to avoid some machines, such as those on which Maya is running interactively or those that are busy with another rendering task or some other large job. To help you identify such machines, you can use the Load Scan tool (shown in Figure 11.9), also written by Karl Rasche, for Unix-based systems with OpenGL and gtk installed. (You'll find Load Scan on the CD.) To run Load Scan, type load_scan at the command prompt. The Load Scan main window is divided into two sections: Available Machines and Machines to Scan. You can select computers listed in the Available Machines list and drag them to the Machines to Scan area. Clicking the Scan for Life Signs button displays their current load and usage. From these machines, you can identify the computers you want to use for distributed rendering. The scan provides several pieces of information for each machine: user on :0, load, and maya. If a machine is down, the "Not Responding" message is displayed in the user on :0 column; otherwise, this column displays the name of the user currently logged in to the system. The load column gives an idea of the percentage of CPU cycles being used. If this number is high (close to 100.0), you won't get much rendering help from this machine. Finally, the maya column lists whether a Maya render or interactive Maya session is currently taking place. We'll discuss more about this situation in the next section.
Recommendations
Because Maya rendering is such a resource-intensive task, you need to follow certain guidelines when assigning machines to participate in the distributed render. These guidelines are a direct result of our experiences, and we hope they will help you avoid some of the unhappy situations (and users!) that we've encountered. One of the items shown in the Load Scan display is a field that indicates whether a Maya session (interactive or render) is currently active on that machine. You'll generally want to avoid starting any render job on a machine running an interactive Maya session. Our 1GHz Linux boxes with 1GB of RAM are barely able to handle Maya interactive sessions on some of our larger scene files with nothing else running on the machine, much less a Maya rendering process. So, recommendation number 1 is: don't run Maya renders on the same machine running a Maya (or some other) interactive session.
Figure 11.9: The Load Scan tool helps you identify machines to use for distributed rendering.
On lesser machines, this problem is exaggerated, as we learned in our days of unregulated distributed rendering. In a perfect world, your co-workers or students would be fairly responsible in launching distributed renders. In the real world, however, an individual can bring a network of machines to its knees, thereby making the machines unusable to interactive users across the network who have never even heard of Maya. Of course, this often happens at night after the systems staff has gone home, and since users can't kill other users' jobs, you come in the next day to one person with a completely rendered animation and a bunch of other people who are not happy campers. This brings us to recommendation number 2: create and enforce rules for distributed rendering if you don't have a dedicated render farm. Our solution was to restrict distributed rendering to night-time hours and to caution (read threaten) users not to use machines on which interactive users were logged in. Even if you have a dedicated render farm, you need to take care in assigning rendering tasks. In general, running two rendering tasks concurrently on the same single-processor machine takes longer to complete than running the tasks one after the other because of the larger overhead of context switching, or the administrative duties that the computer must perform to allow the two tasks to take turns using the CPU. One of these duties is taking a task out of memory and then bringing the other task into memory. The current state of the task might have to be saved to disk, which is extremely time-consuming in computer terms. A situation in which the machine ends up spending most of its time performing context switches is called thrashing. In the world of computer science, thrashing is not good. You can usually tell when your computer is thrashing because you can hear it constantly accessing the disk. To avoid thrashing, follow recommendation number 3: run only one render task at a time on a given machine. If you have multiple distributed renders to perform, partition your network so that no two renders have an overlapping machine pool, as shown in Figure 11.10. These guidelines do not necessarily hold for machines with multiple CPUs, which can handle one render task per processor without thrashing.
Figure 11.10: This network of machines has been divided into three mutually exclusive partitions.
The Socks Program
Socks is a distributed-rendering program that runs on Windows NT/2000/XP. As we mentioned earlier, Windows does not natively perform remote operations, so you need third-party software to send and receive commands. In essence, this is what Socks does, but only in a specific way. Socks is written in Microsoft Visual C++ 6.0 (MSVC) because it offers a good set of GUI creation tools (Interdev) and contains a lot of high-level objects such as TCP/IP (Transmission Control Protocol/Internet Protocol) functionality. Plus, there's a plethora of documentation and online support for this language. GUI code doesn't generally translate well across operating systems, and this is true also for GUIs created with MSVC. Whatever MSVC lacks in cross-platform compilability, however, it makes up for in ease of use on the Windows platform and the amount of support on MSDN (Microsoft Developer Network, found at http://msdn.microsoft.com). We intended Socks to be a small program that perhaps a university, a small company, or just a graphics enthusiast could use in a Windows environment; therefore, it is by no means meant to be used for mission-critical applications (although we'd like to think it could operate in that capacity).
Installing Socks
Installing Socks is a simple procedure. The only requirement is that you have at least one TCP/IP connection on a machine running Windows NT/2000/XP. As a general rule, any machine that can run Maya can run Socks. To install Socks, follow these steps:
1. Copy Socks.zip (about 80KB) from the CD-ROM included with this book to a directory of your choice.
2. Extract the files.
3. Run the appropriate executable: Socks.exe (the master process) or Client.exe (the client).
Neither executable is designed to run on any version of Windows other than Windows NT/2000/XP; therefore, run Socks under Windows 95/98/Me at your own risk.
How Socks Works There are two parts to Socks: the master and the client. The master is by far the more complicated program and is responsible for coordinating rendering tasks. During a distributed rendering task, the master resides on one machine, and the client processes run on all the other machines participating in the rendering (see Figure 11.11). The client is therefore only responsible for executing the command line render and sending back status updates to the master. Communication between the master and the client processes occurs over TCP/IP port 61276 (the birthday of its creator).
Figure 11.11: Socks is configured with one machine running the master process and many other machines running the client process.
The Master Process
Listing 11.1 represents pseudocode for the master process. Since we're dealing with asynchronous, event-driven code, the structure of the program is difficult to express with pseudocode; however, the main functionality of the system is denoted, even though it doesn't necessarily correspond to the actual code. This should give you a rough idea of how the master process works.

// Listing 11.1: Pseudocode for the Socks Master Process
repeat
    if (Add_computer_button.Clicked) {
        Add_computer_dialog (New_computer);
        TCP.connect (New_computer.ip);
        Add_computer_to_list (New_computer);
    }
    if (Delete_computer_button.Clicked) {
        TCP.disconnect (Selected_Computer);
        Delete_Computer_from_list (Selected_Computer);
    }
    if (Render_button.Clicked) {
        Build_Option_String (Render_Parameters);
        TCP.Send (Selected_computers, "RENDER" + Render_Parameters);
        Add_Render_to_list (New Render (Render_Parameters, Selected_computers));
    }
    if (TCP.Message_Received) {
        switch (TCP.Message) {
            case "STTUS1": Update_Status (RENDERING);
            case "STTUS2": Update_Status (RENDER_SUCCESS);
            case "STTUS3": Update_Status (RENDER_FAILED);
            case "CHATS" : print (TCP.Chats_message);
        }
        Update_Statistics ();
    }
until (Quit_program)
Figure 11.12 shows an overview of what the master process does after the Render button is clicked. We'll provide additional details on these steps later. System Requirements
As mentioned earlier, the master process runs on a single computer. This computer doesn't have to be anything high-powered; the only requirement is that it must have TCP/IP installed and be able to communicate with other machines in the rendering pool on port 61276. If you are running firewalls to protect machines from outside attacks, you will need to grant access to port 61276 in order for this program to work.
In addition, the scene file to render must be on a network drive available to all machines in the render pool. Furthermore, not only must the .mb or .ma file be available to all the machines, any files that are referenced within the scene must be accessible as well. These include all BMPs, JPGs, or other image files used in your scene as textures, masks, or image planes. Any cached particles or dynamics files must be available to the remote computers as well. And don't forget MEL scripts—make sure that all machines have access to any special MEL scripts you've used to control your scenes. Otherwise, you can end up with a large number of incorrect frames that will have to be re-rendered. Obviously, most of these requirements can easily be met if the source files are all located on a file server that each machine can access. Paths to these files must be the same on each machine, such as mapping drive S to a public folder that each machine can access. Having one machine map the public folder as S and another machine map it as T will not work.
Figure 11.12: The Socks master process performs several tasks during a distributed render.
To make a MEL script available to all the machines, save your script within a procedure in the workspace.mel file in the root folder of your project. This file is read whenever you start Maya or the command line renderer.
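As a minimal, hypothetical example, a procedure saved in workspace.mel might look like the following; every machine that opens the project, interactively or through the command line renderer, will then have it available. The procedure name and its contents are placeholders.

// A placeholder procedure stored in the project's workspace.mel.
global proc setupRenderScene ()
{
    // Anything the scene expects at render time can go here, for example an
    // environment variable that texture paths are built from (placeholder value).
    putenv "TEXTURE_ROOT" "S:/myProject/sourceimages";
}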
The Graphical User Interface
When designing Socks, we reviewed other distributed renderers and tried to create a look that was similar so that the GUI would be familiar to users of these other renderers and so that it would be easy for new users to learn. We also made the options dialog box similar to the interface in Maya so that any Maya veteran would find it easy to use. Interestingly, the time devoted to creating the GUI was equal to, if not more than, the time devoted to programming the actual engine that runs the renderer.
The Main Socks Window Figure 11.13 shows the main Socks window. At the top, the IP address of the machine running the master process is displayed next to the Running on IP tag. The remainder of the window is laid out in three distinctive parts that follow the natural flow of submitting renders to the distributed renderer. The first part relates to the file that you want to render; the second part concerns the computers on which you want to render; and the third part contains the options and other information relating to the render job itself. File Options
Figure 11.13: The main Socks window runs on the master machine and allows you to control many aspects of the distributed render.

The first portion of the main Socks window is devoted to options relating to file-level actions. Every element in this portion relates directly to a command line parameter that is passed to Render.exe. Obviously, your first task is to choose a file to render. Click the Browse button next to the Maya File to Render field to find the file you want to render, and click the Browse button next to the Windows Project Directory field to find the project. Or simply enter the full pathname of the file in the appropriate field. The remaining parts of this window basically translate into command line parameters that are sent to the remote machines. For example, typing D:\temp in the Output Images to Directory field has the effect of adding -rd "D:\temp" to the arguments passed to Maya's renderer. There really isn't much programming involved; it's merely a matter of matching the GUI elements with the command line arguments passed to Render.exe. Clicking the More Options button opens another window of options that we'll discuss later in this chapter.
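For example, a filled-in set of fields might translate into a command along these lines; the paths are placeholders, and exact flag names can differ slightly between Maya versions:

Render -s 1 -e 100 -rd "D:\temp" -proj "S:\myProject" "S:\myProject\scenes\shot01.mb"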
Computer Options
The next portion of the program contains about 80 percent of the code that is not directly related to the interface. This may seem like a disproportionate amount for one list box and six buttons, but hidden beneath those buttons and the list box is a host of functions:
• The TCP/IP implementation for Socks
• Error-handling routines for user mistakes and computer failures (lost connections)
• The communication protocol for the master and client computers
• File I/O routines for loading and saving settings
• The code for handling the list box and the six buttons
The list box contains the list of computers that make up the render pool. The columns in the list box contain information that is useful when you are running a render job—the
computer name, its IP address, the name of the file being rendered, the connection status, and the frame number. With these simple columns, it's easy to see what each computer is up to and whether it has a problem. Values contained in most of the columns are intuitive, except for ID and Status. The number in the ID column is the identification number of the computer within Socks and is primarily used for debugging purposes. The Status column is more involved and can take one of six possible values that have the following meanings:
CXN FAILED The computer cannot be reached because the machine is down, not running the client, or has one of many other problems that might be keeping it from communicating.
LOST CXN A previously successful connection is terminated between the master and a client machine. LOST CXN can indicate a willful disconnection of the client machine or an error that has occurred in the connection between the master and the client.
CONNECTED You have established communication with the client machine, and the client is awaiting orders.
RENDERING The client machine is busy rendering.
RENDER SUCCESSFUL The client machine has successfully completed the frame that it was assigned.
RENDER FAILED The client machine failed to render the frame it was assigned. Currently, Socks does not display the reason for the render crash, but this would be a nice option to add in the future or in your own distributed renderer.
The Remote/Local column is reserved for implementation at a later date.
You can add computers to the list in two ways:
• You can add one computer at a time by clicking the Add button.
• You can use a file that contains a list of machines to add simultaneously.
Clicking the Add button opens the Adding a Computer to Render List dialog box, as shown in Figure 11.14, which asks a few questions about the machine you want to add. This information includes the name of the computer (actually, this is for the user's convenience only), the client's IP address, and an indication as to whether to use all the client's processors or only one. This final bit of information is useful because many machines on a render farm can be multiprocessor machines, and Maya's command line renderer defaults to using only one processor. A check box relating to remote rendering is also present, but this check box is currently not implemented. After you click the Add button, Socks attempts to make a connection to the client machine. The result is displayed in the Status column.

Figure 11.14: Use this Socks window to add computers to the render pool.

If you have previously added a whole slew of computers and saved them to a computer list file, you can click the Load button to add all the machines stored in the file. Socks attempts to connect to each machine found in the file. For added convenience, Socks attempts to read in a default list of machines stored in the file default.clf (stored in the current directory) on
startup. This file is saved each time you quit Socks, and it contains all the computers in the list box at termination. Thus, you can easily enter all the machines once and let Socks load them automatically from then on. Alternatively, you can create a text file by hand containing all the machines in the pool. The format of each line in this file is as follows: <1> The Add and Load buttons have their counterparts in the Delete and Clear buttons. To remove a machine from the list, select it and then click the Delete button. Clicking the Clear button removes all the computers from the list. Finally, clicking the Save button saves the current list of computers in a computer list file. You can use this feature to create pools of machines that can later be added to the render farm in stages. Near the bottom of the Socks main window are the controls for the render jobs. They function somewhat similarly to the controls for the computer list. Each of the columns provides the user with some useful information. Table 11.1 lists and describes these columns.
Other Fields
The bottom of the main Socks window contains other useful fields and buttons, as well as some rendering statistics information. Clicking the Render button initiates the render on the Maya file selected according to the options chosen on the machines highlighted in the Available Computers list. You can Shift+click and Ctrl+click to select multiple machines for the render. The selected machines can be used for different rendering tasks, though the order in which a particular machine performs these tasks is determined by the Socks priority, which you can set using the Priority slider located next to the Render button. You can use the Chatting field to exchange messages with users on the client machines. Simply select a machine from the Available Computers list, type the message in the field at the bottom, and click the Send button. Any messages sent to you from client machines are displayed in the larger box with the IP address pre-pended to the message. To use the e-mail capabilities of Socks, you need to fill in the three fields in the SMTP Options section. Currently, the only e-mail message that Socks sends you is notification that a render job is complete. In the Server field, enter a fully qualified SMTP (Simple Mail Transfer Protocol) server name, such as smtp.yournet.com. In the Your E-mail Address field, enter the e-mail account that you use on that server. Leaving it blank or using an e-mail address
from another account might not work on that SMTP server. And, of course, you need to fill in the To E-mail Address field with the e-mail address of the recipient of the updates. Once a render is complete, Socks connects to the SMTP server and attempts to send a simple e-mail message notifying the recipient that the render is finished and the time it completed. On the far right are a number of statistics fields useful for monitoring progress of the renders. The Render Progress bar gives a visual indication of how far along the rendering tasks are, based on a percentage of the Total Frames and Frames Completed. Total Frames gives a combined total of all frames that need to be rendered across these tasks, and Frames Completed tells how many of these frames are finished rendering. Number of Renders is the number of currently active rendering tasks. Finally, Active Computers displays the number of client machines that are currently participating in the rendering. When a job is complete, all its statistics are removed from these fields. Clicking the Exit button (located beside the More Options button) exits the Socks system and terminates all connections with the client machines.
Figure 11.15: Use the More Options dialog box to set rendering options for a distributed render.
The More Options Dialog Box
If you are familiar with Maya, you'll recognize the fields in the More Options dialog box, which is shown in Figure 11.15. The interface closely mimics the Render Globals dialog box in Maya and provides an easy way to set the most commonly used options. If you leave any option blank, Socks assumes that the option was set correctly in the source file render globals and does not send the parameter to the command line renderer. For all other parameters, the values set here take precedence over those in the scene file. With each render, you must set the Frame Range. Socks uses this range regardless of what is set in the scene file that you choose to render. At the bottom of the dialog box is the Manual Entry section. If you are a Maya expert, you can forgo the GUI and enter your own options (using Override), or you can add some parameters to append to those already specified (using Append). If you use the Frame Range options in the Manual Entry section, you might render duplicate frames. Socks does not parse the Manual Entry command line, but just blindly sends it to the client.
If you do not want to pass any parameters to the renderer and want to use the defaults in the file, click the Override radio button and leave the input field blank. This calls the renderer with no command line options but the filename. For information about the rest of the options, see the Maya documentation on rendering.
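As a concrete, hypothetical example, an Override entry such as the following would replace the options the GUI would otherwise build; the paths are placeholders, and flag names may vary slightly between Maya versions:

-s 1 -e 240 -b 2 -rd "S:\myProject\images" -im "shot01"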
The Client Process
You must run the client on any machine on which you want to run distributed renders. The client is a daemon that resides on the client machines and accepts TCP/IP connections from the master. The client is by far the simpler of the two programs. It basically has only two functions: communicating with the master program and executing Maya's renderer. Contained in these two functions, however, are a number of smaller operations. Listing 11.2 shows pseudocode for the client process. Figure 11.16 provides an alternative view of the client's functions through a workflow diagram.

// Listing 11.2: Pseudocode for the Client Process
repeat
    while (NOT (Connected_to_Master) AND NOT (Quit_program)) {
        TCP.Listen (61276);
        Do_Idle_Operations ();
    }
    while ((Connected_to_Master) AND NOT (Quit_program)) {
        TCP.Listen (61276);
        if (TCP.message_received) {
            switch (TCP.message) {
                case "RENDR": Start_render (TCP.render_options);
                default: print (TCP.message);
            }
        }
        wait (5);
        if (rendering) {
            Check_On_Render ();
            TCP.Send (Status_Of_Render);
        }
        Do_Idle_Operations ();
    }
until Quit_program
Figure 11.16: The Socks client process's main responsibility is rendering.
Figure 11.17 shows the main client dialog box. As in the main master dialog box, at the top, the IP address of the machine running the client process is displayed next to the Running on IP tag. In the Location of Render.exe field, you specify the location of the Maya renderer on the local client machine. The Recent Messages box displays any informational messages from Socks, as well as messages sent from the master machine. If you are a user on the client machine and want to send a message to the master, you can simply enter the text in the Chat with Master field and click the Send button. The Connection Status field displays information about the client's connection to the master process machine, and the Render Status field displays information about the current rendering task. Clicking Quit terminates the client process.
Figure 11.17: The Socks client main window displays information about current renders.

Communication
When the client starts, it creates a socket that listens for incoming connections on port 61276, as mentioned earlier. Once a connection is made between the two machines, the client no longer accepts any connections until the current connection is broken. This arrangement simplifies the client process since handling multiple connections increases complexity exponentially. Future versions of Socks may include the capability to handle multiple connections. Fortunately, allowing only one connection eliminates the possibility that two machines running master processes can submit jobs to the same client simultaneously, in turn causing the client machine to run two instances of the renderer. (Recall the earlier discussion in which two instances of the same rendering program fight for CPU time and cause the final render time to be substantially longer than running the two renders back to back.) Having only one connection also allows for an easier implementation of the communication between machines since you don't have to keep track of which master machine you are communicating with. We'll discuss this protocol shortly.
Another aspect of communication between the master and client machines occurs in the form of status updates. As soon as the master computer connects to the client, the client starts a timer that sends a status update to the master every five seconds. At these intervals, the client process checks to see if the computer is rendering or idle. If the computer is idle, the client process also determines whether the computer has successfully or unsuccessfully rendered a frame. The five-second interval provides frequent updates to the master, but not so frequent as to clog up the network or add wasted time to the render job. The packet size for these updates is roughly 20 bytes, so bandwidth should not be adversely affected.
The Communication Protocol
Creating the connection between the machines is easy using Windows' CAsyncSocket class; the difficult part is creating the language, or protocol, that the two programs will use. Following the communication protocol of SMTP and others, Socks messages take the form of a short command word followed by suboptions and data. A standard packet from Socks looks like the following:

Command      Option      Data
5 bytes      1 byte      Up to 2042 bytes of data
The Socks Commands Currently only a few command words exist in Socks; however, the structure allows for many more. The commands and options described in the following sections are currently available.
The RENDR Command
The master process issues a RENDR command to start a render on an idle client assigned to the current render job. The format of the RENDR command is as follows:

Command      Option      Data
RENDR        0/1         Options that the client will use to render (for example, raytracing, the number of frames, the file format, and so on)
When you click the Render button, the master process builds a data string that represents the options that will be used for the command line renderer on the client machine. This data, along with the correct header, is sent as a packet to the client machine. When the client receives the packet, it strips out the first five characters to determine the command word, and the sixth character to get the option. If the option value is set to 0, a normal render occurs (we'll call this RENDR0). The client machine parses the data string sent to it from the master and combines it with the path to the Maya renderer on the local machine to create a fully qualified command. Recall that this data is composed of the rendering options set in the master GUI. As mentioned previously, these values override those set in the scene file. The client then runs the code in Listing 11.3, which creates the process, sets a PROCESS_INFORMATION structure that is useful in other parts of the code, and sets the priority of the render to low.

// Listing 11.3: The Code That Launches the Render in the Client
ret = CreateProcess (NULL,             // create the process
                     commandline,
                     NULL, NULL, FALSE,
                     CREATE_NEW_CONSOLE,
                     NULL, NULL,
                     &StartupInfo,
                     &ProcessInformation);
// get the handle for the process to be used to set the priority
render_process_handle = ProcessInformation.hProcess;
// set the priority to the lowest priority in NT and 2000
SetPriorityClass (render_process_handle, IDLE_PRIORITY_CLASS);
After successfully creating the render process, the client sends a STTUS1 message to the master machine. We'll discuss the STTUS command words in the next section. Although you can send a RENDR1 command, it is not yet implemented. The RENDR1 command tells the computer to render a file that is local to the client machine and not found on a file server. Distributed rendering of a file that is not on a shared file system is a complex problem and requires a lot more code.
The PROCESS_INFORMATION Structure
When the client calls CreateProcess, it passes a reference to a structure of type PROCESS_INFORMATION. Inside this structure are a few useful gems of data that you need to know in order to check up on a render. Here's the format of the structure (as described on MSDN):

typedef struct _PROCESS_INFORMATION {
    HANDLE hProcess;
    HANDLE hThread;
    DWORD dwProcessId;
    DWORD dwThreadId;
} PROCESS_INFORMATION;

The most useful data is in the hProcess field, which specifies a handle to the newly created process. (You can find the meaning of the other fields on the MSDN.) With this handle, which the client copies and stores in a private class member, you can set the CPU priority of the render running on the client machine. (This priority is different from the Socks priority explained earlier.) The process handle also lets you determine if the render is still running and, if not, retrieve the exit code of the process. With this information, you can determine if the process terminated cleanly or by error. Of course, this handle also lets you kill the render if the need arises.
The STTUS Command
The STTUS command sends acknowledgments and status updates between the master and client machines. Because these commands are sent much more frequently than others, we want them to be small in size so as not to consume much network bandwidth. The format of the STTUS command is as follows:

Command      Option      Data
STTUS        1/2/3       No data
The meaning of a STTUS command depends on the value in the Option field:
STTUS1 Tells the master machine that the client has just started a render and will be busy until the client sends another STTUS message telling it otherwise. This message is sent most frequently since the client issues one of these every five seconds during rendering. To conserve bandwidth, the client does not send messages when it is idle.
STTUS2 Tells the master machine that the client has successfully rendered an image and is awaiting more commands.
STTUS3 Tells the master machine that the client has unsuccessfully rendered an image and is awaiting more commands. The master process places this frame back into the list containing all the frames that still need to be rendered. Unfortunately, there is currently no way to report the reason that the render failed, so the master process tries to send that machine another frame in hopes that this was just a one-time error.
Figure 11.18: The Socks master and client processes communicate through a variety of messages during a distributed rendering task.
The CHATS Command
Although the CHATS command is not vital to the functionality of Socks, it can come in handy when rendering across large networks. The format of the CHATS message is as follows:

Command      Option      Data
CHATS        None        A string that you want sent to the master or client computer
The CHATS command allows two users on the network to communicate through text messages. If someone at a client machine is noting a problem with the render or just wants to talk to the person running the master process, the two can chat back and forth using the text boxes provided in each program. Similarly, the person at the master machine can begin a chat session by highlighting a client machine in the main master process window, typing a message, and clicking Send. Figure 11.18 shows an example communication scenario. As you can see, it is a simple protocol (why make things harder on yourself?), but still contains room for more complex message passing in the future. In this example scenario, the master process initially sends a render command to the client to render frame 1 of an animation. Every 5 seconds after that, the client sends back a status message indicating that it's still working on the render. After 30 seconds, the client informs the master process that it successfully completed the render and is now ready for another rendering task. The master process then sends a render command to the client to render frame 4. The client, however, experiences an error with this frame render and sends the master process a status message with the error. At this point, the master process places frame 4 back in the pool of unrendered frames for reassignment and sends the client machine a new rendering task.
Summary In this chapter, we provided information that we hope will help you make better use of your rendering resources. A few simple tools and recommendations can make a world of difference in getting the most out of your render farm. We also included a discussion of Socks, a distributed renderer for Windows machines. As written, Socks can help you with your distributed rendering tasks, but our hope is that you will be able to write your own distributed renderer that will best suit your needs. The source code on the CD-ROM included with this book should give you a good launching point, in combination with the description of the code in this chapter. Before you know it, you could have dozens of people performing distributed rendering on hundreds of machines shooting billions of light rays to render your animation!
Index
Note to the Reader: Page numbers in bold indicate the principle discussion of a topic or the definition of a term. Page numbers in italic indicate illustrations.
absolute icon positioning, 276, 277, 277-278 absynthesis.com, 51, 79 Action team tasks, 183-186, 184-185 Akin, Robin, 85 aliaswavefront.com, 26 ambient light, 297-298, 297-298 animatics, 227-228, 228 animation, 3-33, See also lip-synched animations; motion capture difficulty of, 4 drawing, 3, 6-7, 6-7 keyframing and, 58 modeling, See also modeling usingNURBS, 8, 10-16, 10-16 overview of, 3, 8 using polygons, 9, 10 using SubDivision surfaces, 9-10, 10 using trims and blends, 8-9, 10 overview of, 3-4, 33 performance animation, 86 pose-to-pose method of, 32 preproduction stages, 4-7, 6-7 scriptwriting, 3, 4-6 setup for animation speed, 22 arm creation example, 22-32, 23-31
to carry story forward, 21-22, 21 drawings and, 20, 20-21 having/sticking to plans, 20 in lip-synch animation, 151-153, 151-152 overview of, 3, 19-20 perfection versus good enough, 22 stages in, 3-4 texturing consistency, 17 converting NURBS to polygons before, 18 using files on CD, 19 lip-synched animation, 153-156, 155, 157 mapping to light attributes, 295-297, 296-297 NURBS versus polygons, 16-18, 17 overview of, 3, 16
envelope attribute, 135 of light Ambient Shade, 297-298, 298 color, 295-297, 296-297 decay rate, 296-297, 297, 299-300, 300 Emit Diffuse/Emit Specular, 296, 296 intensity, 295, 296 ramp, 295, 296-297 lip-synch audio attributes, 160, 160-161 locking and hiding, 245-246, 246 non-animation, locking, 108-109, 108 audio. See lip-synched animations
B baking simulation data, 199, 209-210, 210, 220 bind pose, 153-154 binding arm mesh to skeleton, 26, 27 Blend Shape dialog box, 161, 161 blend shapes. See lip-synched animations bounce lights, 315, 315-318, 317-318 Boundary tool, 11-12, 11 Bublitz, Ron, 264 buttons, See also controls Deselect All button, 285-286 in particle simulations, 257-258, 258 radio buttons, 273-274, 274, 280 Select All button, 285-286 symbol buttons, 274
in Photoshop, 19, 19 planning, 17 polygons, 18-19, 18-19 reference objects, creating, 154-155 thumbnailing, 3, 4 time constraints and, 4 area lights, 299-300, 300 arm example. See setting up for animation arrays, 283 Attach Surfaces Options dialog box, 14, 14 Attribute Collection script, 26 attributes, See also color centralizing control of, 243-245, 244 character size/proportion, 181-183, 182
C caching dynamic data, 199, 216, 331 Cameron, James, 89 carapace example. See organix modeling cartoon character example. See lip-synched animations CD-ROM files ArmSetupStart.ma, 23 base_humanoidShape.tga, 174 Blobbyman.ma, 30-31, 31 Bonus_Binaries, 81 breakFinal.mov, 212, 213 breakOutFinal.mov, 221, 221
breakTake1.mov, 206 breakTake2.mov, 207, 208 breakTake3.mov, 209, 209, 211 breakTake4.mov, 211 bumpComplete_perspView.mov, 105 bumpComplete_sideView.mov, 105 bumpLeftLegPlat_sideView.mov, 102 bumpLeftLeg_sideView.mov, 102 bumpLeftLegsOnly_sideView.mov, 102 burlapcUV.tga, 19 burlapsq.tga, 19 burlapsq.tif, 16 carapace2.mb, 56 carapace.mb, 54 ch2horse_ref.tif, 35, 36 ch2horsewalk.mov, 48 charGUI_icontextcheckbox.mel, 276 charGUI_radio.mel, 274 Chinese_Dragon_Final.mb, 77 chinese_dragon_revisited320Qtsoren3.mov, 77 Chinese_Dragon_RevisitedA.mb, 72, 74, 76 Chinese_Dragon_RevisitedB.mb, 74-75 Chinese_Dragon_RevisitedC.mb, 77 ColorMap.tga, 19 cone1.mb, 59 cones3b.mb, 64 cones3c.mb, 64-65 cones3.mb, 62 ControlBox.mel, 22 floursackdismembered.ma, 11 Mcblended.mb, 140 Mcblendshapes.mb, 130-131 Mcheadonly.mb, 130-131 Mcreadytotalk.mb, 144
Mctarget.mb, 136 Mctargetshapes.mb, 135 MClast.mb, 144 MCmagnifique.mov, 129, 148 MCmagnifique.wav, 144, 146 mrl26b024_crestMist.mov, 257, 258 mrl35104_breakdown2.mov, 229, 229 paneShatter.mov, 218 puppet.mb, 320 pwl73020_runOff.mov, 237, 237 pwl73020_sideSplash.mov, 238, 238 rigidTableFinal.mb, 209 rigidTableStart.mb, 202, 204 Rox_Audio.wav, 160 Rox_Final_No_Anim.mb, 157-158, 160 Rox_Final_Ready_for_Texture.mb, 154 Rox_Snout_Model.mb, 151 Rox_Snout_Model_With_Bones.mb, 151 Rox_Talks.mov, 163, 163 Rox.tif, 150, 154 sackUVone.ma, 18, 19 sackUVtwo.ma, 18, 19 scary_face.mb, 66-67, 66-67 set.mb, 300 simplePaneExample.mb, 218 SingleStroke.mb, 69-70 Snap.mel, 115, 115 sportPalace.ma, 186, 187, 188, 189 Squirm320QTsoren3.mov, 58, 59 structureDeform.mov, 217 Walk.mb, 95 waveComplete_perspView.mov, 113 waveR_collar_perspView.mov, 111
waveR_elbow_perspView.mov, 111 waveR_shoulder_perspView.mov, 111 waveR_wrist_perspView.mov, 111 woodenDoll.mb, 274, 278, 279, 284 Xfrog_primed320QTsoren3.mov, 79 Zoid crowd animations, 192 Zoid_16arr_hue.ma, 179 Zoid_16arr_size.ma, 181, 182 Zoid_animated_good.mb, 183, 188, 191 Zoid_animated_orig.ma, 183 Zoid_base.mb, 172, 174, 179 Zoid_crowdAnimator.mel, 189 Zoid_hueVar.mel, 175, 179, 181, 186, 191 Zoid_rowCreator.mel, 186, 188 Zoid_sizeVar.mel, 181, 182, 186, 191 tools Frame Check, 332 Load Scan, 333 Planarize plug-in, 16 Socks.zip, 335 Xfrog 3.5 demo, 77-79 trying out scripts on, 285 CenterofGravity control, 31, 32 Channel box, 97, 97, 98, 99, 100-101, 101-103 Character node controls, 31, 31-32 character sets, creating, 107, 142, 342 CHATS command in Socks, 349
chatting field in Socks, 339, 341 check boxes, 272-273, 273 Chinese Dragon example, 72-77, 73-76 client process in Socks, 335, 336, 343-349, 344-345, 348 clips. See lip-synched animations; motion capture clusters, creating, 155, 155-156, 157 color coding in particle simulations, 241-242, 242-243, 243 in crowd scenes, 173-174, 174, 178-181, 180-181 of light, mapping texture to, 295-297, 296-297 commands in Socks, 345-349, 348 computational fluid dynamics, 235, 236, 239-240 Connection Editor, 310, 311 "The Conquest of Form" animation (Latham), 51 context switching, 334 Control box, 96-97, 97 control vertices (CVs), 8 controls, See also buttons animation, displaying, 96-98, 97-98 animation, placement of, 30-32, 31 graphic controls check boxes, 272-273, 272 icons, positioning, 275-278, 277 icons, types, 274-275, 275 overview of, 272 radio buttons, 273-274, 274 symbol buttons, 274 copies versus instances, 55, 55, 59 Create Blend Shape Options dialog box, 158, 159 Create Clip Options dialog box, 112, 112, 125, 325 Create Render Node dialog box, 295, 296 crestMist example. See dynamic particle simulations crowd scenes, 167-193 Action team tasks, 183-186, 184-185 biggest problems, 172 directing audience attention, 169-170 Genesis team tasks defined, 171, 172
Maya ASCII files and, 174-175, 175 Maya binary files and, 174 modeling characters, 172-173, 173 running scripts, 179 script architecture and, 175-178 shader trees/colors, 173-174, 174 varying character size/proportion, 181-183, 182 varying colors, 178-181, 180-181 heroes and, 169-170 Horde team tasks defined, 171, 186 Maya ASCII files and, 188-189 Maya binary files and, 189 positioning crowd members, 186-188, 187-189 setting status tags, 189-190, 190 varying crowd member animations, 189-191, 190-191 improving, 192-193 manual vs. procedural methods, 168-169, 169 modifying flexibly, 168-169, 191, 192 naming components, 173-174 overview of, 167-168, 193 patience and, 172 task forces for animating crowd as whole, 186-191, 187-192 animating variable characters, 183-186, 184-185 modeling variable characters, 172-183, 173-175, 180-182 overview of, 170-171 CVs (control vertices), 8
D D'Arrigo, Emanuele, 167 Davis, Timothy A., 325 decay rate of light, 296-297, 297, 299-300, 300
depth map shadows, 304-306, 304-306 Derakhshani, Dariush, 119 diffused light, 309-312, 310-311, 314-315, 314-315 dinosaur shattering glass. See dynamic rigid body simulations
directional lights, 298, 307, 307-308 dissipating shadows, 308, 309 distributed rendering, 325-349, See also rendering defined, 325 dynamic simulations and, 199, 331 Frame Check tool, 332, 332 at frame level, 327, 327 at image level, 326, 326 using Maya Dispatcher dropped frames problem, 330-331 dynamics problems, 199, 331 Host Configuration window, 328-329, 329 incomplete frames problem, 331 interactive session problems, 333-334 Jobs menu, 329, 330 overview of, 326, 327-328 Pool menu, 329-330, 330 rendering licenses, 330 Save/Render window, 328, 328 sending packets of frames, 328 setting maximum jobs, 329, 329 specifying idle time, 329, 329 Submit Job window, 328, 328 The_Dispatcher_Interface window, 328-330, 329-330 thrashing problem, 334, 334 overview of, 325-326, 349 recommendations, 333-334, 334 using Socks CHATS command, 349 chatting field, 339, 341 client process, 335, 336, 343-349, 344-345, 348 commands, 345-349, 348 computer options, 339-341, 339-340 defined, 335 exiting, 339, 342 file options, 339, 339 GUI, 338 installing, 335 main client window, 344, 345 main master window, 339-342, 339-340 master process, 335-343, 336, 338-340, 342
master/client communication, 345-349, 348 MEL scripts and, 337-338 monitoring progress, 339, 342 More Options dialog box, 342-343, 342 PROCESS_INFORMATION structure, 347 render job options, 339, 341 RENDR command, 346, 348 SMTP options, 339, 341-342 STTUS command, 347, 348 system requirements, 337-338 drawing, 3, 6-7, 6-7, 20, 20-21 Duplicate Options dialog box, 60-61, 61, 70, 70-72, 72 dynamic particle simulations, 223-261 advanced techniques balancing forces, 248, 248 creating buttons, 257-258, 258 custom fields, 252, 252 using emit command, 249, 250 emitting particles in right places, 248, 249 overview of, 247
for rendering, 254-257, 254-258 summing force vectors, 251, 251 for turbulence, 251, 251, 252-253 choosing simulation methods artistic motives and, 234, 234 by brainstorming, 239 computer graphics, 233, 233 fluid dynamics, 235, 236, 239-240 laying out options, 234-235 Maya versus custom software, 247 overview of, 231 particle simulation, 239-240 practical techniques, 225, 232-233, 232 real-time feedback and, 241 separating elements and, 235-236, 236-239 and testing, 246-247 distributed rendering problems, 199, 331 final results, 259-260, 259-261 levels of control in, 246-247 overview of, 223, 224-225, 260-261 preproduction: R&D estimating resources/time, 230-231 gathering visual references, 224-226, 224-227 interpreting storyboards, 226-229, 228-230 mist types and, 230 naming elements, 230 overview of, 223-224 planning, 226-231,228-231 previewing animatics, 227-228, 228 pros and cons of, 239-240 techniques for controlling centralizing attributes, 243-245, 244 color coding, 241-242, 242-243, 243 global variables, 244, 245 locking/hiding attributes, 245-246, 246 numeric displays, 241-242, 242 overview of, 241 spline controllers, 242-243, 243 testing/documenting, 258-259 dynamic rigid body simulations, 195-221 baking simulation data, 199, 209-210, 210, 220 caching simulation data, 199, 216
freezing transformations, 215 grouping bodies in separate layers, 199 in The Matrix, 196 overview of, 195-196, 221 planning, 196-197 pool break example backspin error, 209 creating break pattern, 208-211, 209-212 developing rigid bodies, 202-208, 203-208 fixing floating balls, 204, 205 fixing jumping balls, 205-206, 206, 208, 208 fixing skating balls, 206-207 integrating/rendering, 212, 213 overview of, 201 planning the shot, 201-202, 202 postproduction: "cheating" shots, 199-200, 200 preproduction: R&D, 197-198 problems in deformations, 216-217, 217 in distributed rendering, 199, 331 fixing in postproduction, 200 interactivity, 216 interpenetrations, 204, 206, 206, 215-216 nonrepeatability, 216 reversed normals, 203, 215-216 production: getting "shots", 198-199 rigidSolver Step Size setting, 207 shattering glass example adding gravity, 219 integration/rendering, 220-221, 221 overview of, 212-213 planning, 213-214, 214 primary shatters, 217, 219 production, 218-220, 219-220 research & development, 215-218, 215, 217-218 secondary shatters, 217-220, 218-220 speeding up, 199 when to use, 196
E emit command, 249, 250 Emit Diffuse/Emit Specular attributes, 296, 296 extraordinary points, 40-41, 42, 46, 48
F f-curves, adjusting, 99, 100, 113, 113 feature films Final Fantasy, 86, 87, 90-91, 92 Godzilla, 87 A League of Their Own, 4 listed, 87, 168 The Matrix, 196 The Perfect Storm, 223-261 Snow White, 89 Star Wars: Episode I, 165, 223 Titanic, 86, 87, 89 Twister, 254, 254 Fernandez, Florian, 95, 304 fill light, 318-319, 319
filters in cleaning mocap, 92, 92-93 Final Fantasy film, 86, 87, 90-91, 92 FK and IK icons, switching between, 281-284 flipping in mocap, 93 flour sack example. See NURBS fluid dynamics, 235, 236, 239-240 Frame Check tool, 332, 332 Frame Extensions, 154
symbol buttons, 274 overview of, 263-264, 289 planning cutting icons, 267-268, 268 naming conventions, 265-267, 266 overview of, 264-265 procedures arrays and, 283 creating Select All/Deselect All buttons, 285-286 exchanging icon images, 284-285 overview of, 278 scriptJobs and, 280, 287 switching between FK and IK icons, 281-284 updating GUI, 278-281, 286-288 GUI in Socks, 338 frames. See distributed rendering Frazier, John, 257 Freeform Fillet Options dialog box, 12, 12 freezing transformations, 215
G Genesis team. See crowd scenes global stitching, 14-16, 15-16 global variables, 244, 245 Godzilla film, 87 Graph Editor adjusting f-curves, 99, 100, 113, 113 creating lip-synched vocal tracks, 126, 126 reducing mocap noise, 92, 92 graphic controls. See controls Greenworks Xfrog, 77-79, 78-79 GUI creation for animators, 263-289 adding user help features, 288-289 creating window check boxes, 272-273, 273 graphic controls, 272-278, 272, 274-275,277 icon positioning, 275-278, 277 icon types, 274-275, 275 layouts, 269-270 optionMenu, 270-271 overview of, 268-269 radio buttons, 273-274, 274, 280
H Hain, Thomas, 183 HeadandNeckControl, 31, 32 Helms, Robert, 195 HipControl, 31, 32 Horde team. See crowd scenes horse modeling example. See SubDivision surfaces Host Configuration window, 328-329, 329 HSV color, 180 HyperShade tool, 302, 302-303, 310, 310-311
I icons, See also GUI creation cutting, 267-268, 268 exchanging images, 284-285 FK and IK, switching between, 281-284 positioning, 275-278, 277 types of, 274-275, 275 The Illusion of Life: Disney Animation (Thomas and Johnston), 89 Industrial Light and Magic, 223, 232 installing Socks, 335 instances versus copies, 55, 55, 59 intensity curves in light, 300-303, 301-303 intensity of light, 295, 296
J
Jamaa, Elie, 16 Johnson, Rebecca, 119 Johnston, Ollie, 89, 90
K Keller, Phil, 228 keyframe animation. See motion capture kissNwave example. See motion capture Kundert-Gibbs, John, 119, 126, 195, 264 Kunzendorf, Eric, 3
L Latham, William, 51 lattice, 27-28, 27-29 Layer Editor, 93, 94, 97, 97 layouts in GUIs, 269-270 A League of Their Own film, 4 Lee, Peter, 126, 264 lighting, 293-323 Ambient Shade attribute, 297-298, 298 color attribute, 295-297, 296-297 creating bounce light, 315-318, 315, 317-318 diffuse lamp light, 309-312, 310-311, 314-315, 314-315 fill light, 318-319, 319 harsh lamp light, 312-314, 312-314 intensity curves, 300-303, 301-303 moonlight, 307, 307-308 for moving characters, 306, 320-322, 321-322 reflections, 322-323, 323 decay rate attribute, 296-297, 297, 299-300, 300 design, 293-294 duplicating, 301, 302 Emit Diffuse/Emit Specular attributes, 296, 296 HyperShade tool, 302, 302-303, 310, 310-311
intensity attribute, 295, 296 Light Linking, 300, 320 overview of, 293, 323 passes, 294
properties, 294-295 RAM and, 295 ramp attributes, 295, 296-297
rigs, 319-322, 321-322 shadows depth map shadows, 304-306, 304-306 directional light with, 307, 307-308 dissipating, 308, 309 overview of, 304 raytrace rendering of, 298, 304, 308-309, 309 scanline rendering of, 308, 309 texture, mapping to attributes of, 295-297, 296-297 types ambient light, 297-298, 297-298 area light, 299-300, 300 directional light, 298 point light, 298 spotlight, 298-299, 299 lip-synched animations, 119-163 adding personality to vocal tracks, 127-128 creating vocal clips, 124-126, 125 creating vocal tracks, 126-127, 126 fine-tuning timing, 127 head replacement techniques, 149 humanoid cartoon characters adding eye/forehead movement, 137-138, 138 adding target shapes to blend shapes, 136-137, 137 attaching facial surfaces, 140, 141 base shapes, 135, 137 controlling blend shapes, 135 controlling face as whole, 140-142, 141
creating blend shapes, 124, 124, 133-140, 134-135, 137-139 creating blend shapes using groups, 137,139-140 creating character sets, 142, 142 creating happiness/surprise, 138-139, 139 creating head/face shapes, 129-130, 130
creating lip-synch libraries, 123-124, 142-143
creating mouth blend shapes, 133-137, 134-135, 137 creating neutral poses, 130-131, 131 creating sentences from word clips, 146-147 creating target shapes, 123, 124, 124, 135, 135 creating word clips from blend shapes, 143-145, 144-146 cycling clips, 147 defining, 129 detaching facial surfaces, 131-132, 132-133 envelope attribute, 135 importing sound files, 146 overview of, 122-123, 128 planning, 132 preparing for blend shape creation, 130-132, 131-132 rigging for blend shapes, 123-124 scaling clips to match words, 147-148, 147 steps in, 128-129 testing/tweaking results, 146, 148-149, 149 using Trax Editor, 146-148, 147 integrating with body animation, 128 making real animals talk blending mouth shapes to sounds, 160-161, 161 creating blend shapes, 154, 156-159, 157, 159 creating clusters, 155, 155, 157 creating snout texture, 154
creating texture reference objects, 154-155 defining motivation, 159 fitting textures/models to snout, 155-156, 155, 157 frame extensions and, 154 importance of bind pose, 153-154 modeling snout replacement, 150-151 overview of, 149-150, 159, 162-163, 163 playblasting, 160 rendering snout, 162, 162 replacing whiskers, 162 setting audio attributes, 160, 160-161 setting Parallel deformations, 158, 159 setting up for animation, 151-153, 151-152 texture setup, 153-154 troubleshooting blend shapes, 158-159 manipulating facial surfaces, 122, 123-124, 124 manipulating jawbones/muscles, 122, 123, 123 modeling heads/mouths, 122, 122 non-speaking poses, 127 overview of, 119-120, 163 planning, 132 preproduction tasks, 120 recording vocal tracks, 120-121 rigging, 122-124, 123-124 Load Scan tool, 333, 333 locking and hiding attributes, 245-246, 246 non-animation attributes, 108-109, 108 rotation and scale values, 98, 99
M magnetic mocap systems, 86 master process in Socks, 335-343, 336, 338-340, 342 master/client communication in Socks, 345-349, 348 Mastering Maya 3 (Lee and Kundert-Gibbs), 107, 126 Mastering Maya Complete 2, 72, 73 The Matrix film, 196
Maya 4.5 Savvy (Kundert-Gibbs and Lee), 264 Maya ASCII (*.ma) files, 174-175, 175, 188-189 binary (*.mb) files, 174, 189 versus custom software, 247 shortcomings, 78-79 Maya Dispatcher. See distributed rendering MEL (Maya Embedded Language), See also GUI creation overview of, 263-264 Snap tool, 115 Socks and, 337-338 sourcing, 115 model walking examples. See motion capture modeling, See also animation; organix modeling crowd scene characters, 172-173, 173 in lip-synched animation, 122, 122, 150-151 using NURBS, See also NURBS creating basic shapes, 35-37, 36-37 defined, 8 flour sack example, 10-16, 10-16 overview of, 3, 8 using polygons
converting NURBS to, 18, 37-38, 38 converting to SubDivision surfaces, 40-42, 43, 45-46, 46 defined, 9, 10 Paint Effects workaround, 66-68, 66-69 texturing, 18-19, 18-19 texturing NURBS versus, 16-18, 17 using SubDivision surfaces, See also SubDivision surfaces defined, 9-10, 10 hierarchy feature, 39 horse example, 35-49, 36-49 overview of, 35 using trims and blends, 8-9, 10 Monsieur Cinnamon example. See lip-synched animations moonlight, 307-308, 307 More Options dialog box, 342, 342-343 motion capture, 85-117 cleaning blending with keyframe animation, 93-94, 94
defined, 91-92 using filters, 92-93, 92 marker noise, 92-93, 92 marker occlusion/flipping, 93 over-cleaning, 93 twisted joints, 93 defined, 85-86 in feature films, listed, 87 in Final Fantasy, 86, 87, 90-91, 91 human actors and, 87-88 using with keyframe animation by combining, 115-117, 115-116 by layering, 106-114, 108-113 by offsetting, 94-106, 96-106 overview of, 85, 86, 90 layering keyframe animation over with Trax Editor adjusting f-curves, 113, 113 animating disabled clips, 107-108, 113-114 animating enabled clips, 107, 109-112, 110, 114 creating character sets/clips, 107 creating kissNwave clip, 112, 112 locking non-animation attributes, 108-109, 108
overview of, 106, 114 set keys for animation range, 109, 110 set keys for kissNwave animation, 109, 111-112 shifting start time of kissNwave clip, 112-113 workflow steps in, 107 magnetic systems for, 86 model walking examples adding kiss and wave, 106-114, 108-113 over bump in terrain, 94-106, 96-106 tripping over box, 115-117, 115-116 offsetting keyframe animation over adjusting f-curves, 99, 100 adjusting "skin fit", 95 animating offset controllers, 98-99, 100 animating walking over bumps, 100-104, 101-106 displaying controls, 96-98, 97-98 loading scene file, 95 locking rotation/scale values, 98, 99 overview of, 94-95, 105-106 setting preferences, 95-96, 96 optical systems for, 86
overview of, 117 performance animation and, 86 planning, 91 using as reference, 88-91, 91 resources, 90 rotocapture, 89 rotoscoping, 89 using straight out of box, 88-91, 91 in Titanic, 86, 87, 89 websites, 90 when to use, 87-88 Muybridge, Eadweard, 90
N naming conventions for crowd scenes, 173-174 in GUI creation, 265-267, 266 for particle simulations, 230 network rendering, 325 noise in mocap, cleaning, 92, 92-93 numeric displays of particles, 241-242, 242 NURBS, See also modeling cone clusters, animating, 64-65 cones, abusing, 59-63, 59-61, 63-64 converting to polygons, 18, 37-38, 38 creating basic shapes with, 35-37, 36-37 defined, 8 flour sack modeling example attaching fillet geometry, 14, 14 deleting fillet history, 13 fillet blending between isoparms, 12-13, 12-13 global stitching, 14-16, 15-16 overview of, 10-11, 10-11 rebuilding fillet geometry, 13, 13 replacing end caps, 11-12, 11 texturing, 16-18, 17 troubleshooting, 15-16 as proxy polygons, 66-68, 66-68
O occlusion in mocap, 93 offset animation. See motion capture optical mocap systems, 86 optionMenu in GUI creation, 270-271 organix modeling, 51-81
carapace object example abusing NURBS cones, 59-63, 59-61, 63-64 adding simple animation, 56-59, 58 animating NURBS cone clusters, 64-65 instances versus copies, 55, 55, 59 overview of, 54-56, 54-57 inspired by nature, 52-54, 52-53 overview of, 51-52 using Paint Effects tool Chinese Dragon example, 72-77, 73-76 creating objects, 69-72, 70, 72 crossing multiple surfaces, 67, 67 Outliner and, 71 overview of, 65-66, 68, 74, 75-76 polygons workaround, 66-68, 66-69 Sims, Karl, 79, 80 using Xfrog, 77-79, 78-79
Paint Effects tool. See organix modeling Paint Skin Weights tool, 152-153, 152-153 "Panspermia" animation (Sims), 79, 80 Parallel Deformation Order, 158, 159 The Perfect Storm. See dynamic particle simulations Petersen, Wolfgang, 232 "Pinos Mellaceonus" image (Smith), 77, 78 playblasts, 146, 160 point lights, 298 polygons, See also modeling converting NURBS to, 18, 37-38, 38 converting to SubDivision surfaces, 40-42, 43, 45-46, 46 defined, 9, 10 Paint Effects workaround, 66-68, 66-69 texturing, 18-19, 18-19 texturing NURBS versus, 16-18, 17 pool break example. See dynamic rigid body simulations Preferences dialog box, 96, 96 previewing 3D animatics, 227-228, 228 PROCESS_INFORMATION structure, 347 properties. See attributes
R R&LankleControl, 31, 32 R&LhandControl, 31, 32 radio buttons, 273-274, 274, 280 ramp attributes of light, 295, 296-297 Rasche, Karl, 332, 333 raytrace rendering, 298, 304, 308-309, 309 Rebuild Surface Options dialog box, 11, 11, 13, 23 reflections, creating, 322-323, 323 relative icon positioning, 276-277 render farms, 325 rendering, See also distributed rendering lip-synched animations, 162, 162 particle simulations, 254-257, 254-258 raytrace rendering, 298, 304, 308-309, 309 scanline rendering, 308, 309 RGB color, 180, 180 Richards, Jacob, 325 rigging lip-synched animations, 122-124, 123-124 rigid bodies. See dynamic rigid body simulations Ritlop, Frank, 293 rotation values, locking, 98, 99 rotocapture, 89 rotoscoping, 89
Sakaguchi, Hironobu, 90 scale values, locking, 98, 99 scanline rendering, 308, 309 Scientific American magazine, 239 Scott, Remington, 85 scripts, See also GUI creation; MEL architecture, 175-178 Attribute Collection script, 26 on CD-ROM, trying out, 285 running, 179 scriptJobs, 280, 287 scriptwriting, 3, 4-6 Set Driven Key command, 9 set keys, 109, 110, 111-112 setting up for animation, See also animation arm creation example bind arm mesh to skeleton, 26, 27 correct wrist collapse, 27-29, 27-28 create arm IK and Point constrain to hand, 25, 25 create biceps muscle, 29-30, 30 finish arm bone setup, 23-25, 23-24 overview of, 22-23 place controls, 30-32, 31 to carry story forward, 21-22, 21 drawings and, 20, 20-21 having/sticking to plans, 20 in lip-synching, 151-153, 151-152 overview of, 3, 19-20, 33 perfection vs. good enough, 22 for speed, 22 shader trees/colors, 173-174, 174 shadows. See lighting shattering glass example. See dynamic rigid body simulations ShoulderControl, 31, 32 Sims, Karl, 79, 80 simulations. See dynamic single skin objects, 8 skinning methods, choosing, 20 Smith, Mark Jennings, 51 SMTP options in Socks, 339, 341-342 Snow White film, 89 Socks. See distributed rendering sound tracks. See lip-synched animations spline controllers, 242-243, 243
spotlights, 298-299, 299 Square USA, 87, 90-91 Star Wars: Episode I film, 165, 223 status tags, 189-190, 190 storyboards, 226-229, 228-230 STTUS command in Socks, 347, 348 SubDivision surfaces, 35-49, See also modeling defined, 9-10, 10 hierarchy feature, 39 horse modeling example behind and back leg, 45-46, 46-47 body and front leg, 44-45, 45 converting NURBS to polygons, 37-38, 38 converting polygons to SubDivision surfaces, 40-42, 43, 45-46, 46 creating basic NURBS shapes, 35-37, 36-37 ear area, 43-44, 44-45 extraordinary points and, 40-41, 41, 46, 48 eye area, 41-43, 42-43 final refinements, 46, 47-49, 48-49 making patches quadrangular, 40-42, 42-43, 46, 47
mouth area, 39-41, 39-41 overview of, 35, 39, 49 Submit Job window, 328, 328 summing force vectors, 251, 251 Sylvester, Mark, 54 symbol buttons, 274 systematic multireplication, 52
T "Tackling Turbulence with Supercomputers" article, 239 TDs (technical directors), 19-20 texturing, See also animation consistency, 17 converting NURBS to polygons before, 18 using files on CD, 19 lip-synched animation, 153-156, 155, 157 mapping to light attributes, 295-297, 296-297 NURBS versus polygons, 16-18, 17 overview of, 3, 16 inPhotoshop, 19, 19 planning, 17 polygons, 18-19, 38-19
reference objects, creating, 154-155 Thomas, Frank, 89, 90 thrashing, 334 3D animatics, 227-228, 228 thumbnailing, 3, 4 Time Slider, 160-161, 160 Titanic film, 86, 87, 89 Translate node, 31, 31 Trax Editor, See also motion capture creating vocal clips, 124-126, 125 lip-synching blend shapes, 146-148, 147 lip-synching vocal tracks, 126-127, 126 trim and blend modeling, 8-9, 10 turbulence, 251, 251, 252-253 twisted joints in mocap, 93 Twister film, 254, 254
W walking examples. See motion capture websites absynthesis.com, 51, 79 aliaswavefront.com, 26 flo3d.com, 95, 304 highend3d.com, 26, 264 on Karl Sims, 80 Microsoft Developer Network, 335 Perfect Storm video clip, 233 SIGGRAPH simulation, 215 Werth, Susanne, 263
X Xfrog program, 77-79, 78-79
V voice overs. See lip-synched animations
Z Zargarpour, Habib, 223
What's on the CD

The CD-ROM that accompanies this book is packed with useful material to help you improve your Maya skills. In addition to a working demo of Xfrog (a 3D modeling package that creates some very cool organic models), we have scene files, animations, and even source code relating to the chapters in the book, including:
• Models, textures, and animation of a flour sack
• A completed subdivision surface horse, plus animation of it walking
• "Organix" scene files and animations
• Motion-capture data and a pre-rigged skeleton to place it on
• Models and animation demonstrating various lip-synching techniques
• Models and MEL code for generating crowd scenes, plus animation of the crowd doing "the wave"
• Models demonstrating rigid body solutions to a billiards break and shattering glass, plus animation of various stages of work on these scenes
• A simplified version of the "crest mist" generating scene used in The Perfect Storm
• MEL code for generating character animation GUI controls
• Scenes with models and textures for use in practicing lighting technique
• A completely functional Windows-based distributed rendering solution, plus source code for it
You won't have to create entire scenes from scratch for each chapter, because the scenes and animations on the CD will get you started and help guide you to appropriate solutions quickly and efficiently. Please check the Maya: Secrets of the Pros page at www.sybex.com for updates to the CD material.
If you get lonely in the studio, Emanuele D'Arrigo (freelance artist) can teach you how to create a whole crowd from a single base model, as well as how to produce believable crowd motion by adding random variety to your characters' actions. Page 167
Rack 'em up! John Kundert-Gibbs and Robert Helms of Clemson University tame Maya's built-in dynamics engine within budget and on the clock. With a little practice, you can produce controllable effects for specific animation needs, such as the billiards-break exercise in this chapter. Page 195
Habib Zargarpour of Industrial Light and Magic is awash in the complexity of particles as he shares his experience creating the realistic wave mist from The Perfect Storm. Page 223
Susanne Werth of Children's TV in Germany steps you through creating a template GUI to control animation of nearly any character you're likely to run into. Page 263
Frank Ritlop of Weta Digital illuminates the delicate task of lighting with a thorough breakdown of the layers of lights you need to bring reality to your scene. Page 293
Timothy A. Davis and Jacob Richards of Clemson University provide their Windows-based distributed renderer on the CD and then show you how to use it to improve your own rendering. Page 325