I have the privilege of being enrolled in a painting workshop taught by Don Seegmiller! Our first week is almost over, and I thought I would post some of my work. The emphasis of the first week is the ‘eye,’ so I have been practicing drawing eyes. I’ll post them in chronological order; the top-most image is the earliest. You can click on an image to get a slightly larger version. I’m finding this very challenging as I haven’t done much painting. It is all digital painting using Painter.

This one has a bit more attitude. We were asked to limit the time we spend on each study – on average, about 20 to 30 minutes.








This one I did following along as Don painted a similar creature.








I sketched this creature fairly quickly and then spent time on getting the eyes to read.









We are encouraged to use reference images for our paintings. I used a tiger eye for this one. I think it took me more time to figure out how to make it look ‘furry’ than to paint the eye. So much to learn!





Well, I couldn’t resist. ‘The’ classic walking eyeball! I didn’t have time to finish off his arms and legs… remember, only 20 to 30 minutes and I’m still pretty slow at this.








This is my ‘eye’ creature. I had a lot of fun with this character, both in designing him (which went really quickly) and in painting him. I’ve entitled it ‘Boy Snail Sees Girl Snail.’ The creature had to be a monster, and he does look kinda scary… I think.









Web page tweaks…

It was brought to my attention by a kind soul that most of my embedded video clips on my website were no longer functioning. I suspect it may be the new Internet Explorer or an IE/Quicktime combination. Either way, those links ought to be working now! If anyone finds anything out of the ordinary, please let me know!


This was an exercise in using NURBS. Most of my modeling skills have been limited to polygons and subdivision surfaces, so I decided to delve into the mysteries of NURBS – which, in case you were wondering, are “Non-Uniform Rational B-Splines”… right… Anyway, I’m not 100% happy with the final model, but it was a learning experience. I’ve discovered that NURBS do have strengths, and I capitalized on those strengths when I created my hover bike (see below). Creating NURBS objects takes a bit of a different mindset than modeling with polygons: you need to think more in terms of profile curves and surface curves, but these really do lend themselves to automotive types of models. You can click on the image for a larger version. Happy modeling!
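To give a feel for the profile-curve mindset, here’s a toy sketch in plain Python (not Maya’s NURBS API, and all names here are my own illustration). Real NURBS lofting blends curves with B-spline basis functions; this version uses piecewise-linear profiles just to show the idea that a surface point comes from evaluating two profile curves and blending between them:

```python
def lerp(p, q, t):
    """Linear interpolation between 3D points p and q at parameter t."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def eval_profile(profile, u):
    """Evaluate a piecewise-linear profile curve (list of 3D points)
    at parameter u in [0, 1]."""
    if u >= 1.0:
        return profile[-1]
    n = len(profile) - 1          # number of segments
    i = int(u * n)                # which segment u falls in
    local = u * n - i             # parameter within that segment
    return lerp(profile[i], profile[i + 1], local)

def loft_point(profile_a, profile_b, u, v):
    """A ruled (lofted) surface point: evaluate both profiles at u,
    then blend across the surface with v."""
    return lerp(eval_profile(profile_a, u), eval_profile(profile_b, u), v)
```

Sweep u and v over a grid and you get a whole surface hung between the two curves – which is roughly the mental model behind lofting a car body (or a hover bike) out of a handful of cross-section profiles.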

Hover Bike

Just for fun (and to learn more about Mental Ray), here is a hover craft that I created from concept to completion. For now I’m calling it a Hover Bike, but that’s subject to change. It works on the principle of some kind of electro-magnetic-gravity field… I think.

I modeled and textured it in Maya and then rendered it using Mental Ray for Maya. The trickiest part was creating the lightning and making it move with the vehicle (here is a 1 MB video clip of the hover craft in flight). Here is the same animation, but with the sparks limited to a closer range around the bike.

MEL scripting to the rescue! That was the most time-consuming part of the whole process. I created a MEL script that builds each lightning spark from the surface of the sphere down to the ground. The spark gets rebuilt every frame, so if the craft is standing still each strike remains attached to the same place on the sphere and on the ground (wiggling about as it gets rebuilt each time). After a certain length of time the strike gets deleted and a new one gets created randomly somewhere else. I then attached a Paint Effects glow curve to the curve that represents the strike. Since Mental Ray doesn’t render Paint Effects, I used After Effects to composite them together. I could have converted the Paint Effects into poly or NURBS surfaces, but I didn’t like the look of the final strikes.
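The actual script was MEL, but the per-frame logic can be sketched in plain Python (everything here – the `Spark` class, `step`, the jitter amount – is my own illustration of the approach, not the original script):

```python
import math
import random

class Spark:
    """One lightning strike: a jagged polyline from a fixed anchor on the
    sphere's surface down to a fixed point on the ground plane (y = 0)."""
    def __init__(self, sphere_center, radius, lifetime, segments=8):
        cx, cy, cz = sphere_center
        # Random anchor on the lower hemisphere of the sphere.
        theta = random.uniform(0.0, 2.0 * math.pi)
        phi = random.uniform(math.pi / 2.0, math.pi)
        self.start = (cx + radius * math.sin(phi) * math.cos(theta),
                      cy + radius * math.cos(phi),
                      cz + radius * math.sin(phi) * math.sin(theta))
        # Ground point roughly below the anchor.
        self.end = (self.start[0] + random.uniform(-1.0, 1.0), 0.0,
                    self.start[2] + random.uniform(-1.0, 1.0))
        self.frames_left = lifetime
        self.segments = segments

    def rebuild(self):
        """Regenerate the jittered in-between points; calling this every
        frame is what gives the 'wiggling' look while the endpoints stay
        attached to the same spots."""
        pts = [self.start]
        for i in range(1, self.segments):
            t = i / self.segments
            mid = tuple(a + t * (b - a) for a, b in zip(self.start, self.end))
            pts.append(tuple(c + random.uniform(-0.3, 0.3) for c in mid))
        pts.append(self.end)
        self.frames_left -= 1
        return pts

def step(sparks, sphere_center, radius, lifetime, max_sparks=5):
    """One animation frame: drop expired sparks, spawn replacements at new
    random spots, and rebuild every live spark's curve."""
    sparks[:] = [s for s in sparks if s.frames_left > 0]
    while len(sparks) < max_sparks:
        sparks.append(Spark(sphere_center, radius, lifetime))
    return [s.rebuild() for s in sparks]
```

In Maya the returned point lists would become degree-1 curves (rebuilt in place each frame), with the Paint Effects glow attached on top.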

I captured all the creation steps of the craft using Camtasia, so I suppose I could create some training with it… I’m still not sure I want to do that. It contains all the steps for modeling, texturing, UV mapping, rendering and compositing both the craft and the rider. Anyway, if you would be interested in the training, drop me an email. If I get enough interest I may add narration to it and release it, and perhaps even include some bonus material covering the making of the lightning script.

One thing that occurred to me as I was creating the craft was that if I kept the poly count low I could use it in the Unity game engine. With the rider it’s around 6k triangles. I haven’t had a lot of time to play with Unity, but I’m itching to. I have an idea for a small game involving the craft; it would help me learn more of Unity and maybe create something fun at the same time. Just keep learning… just keep learning…


This is Bazoo’s friend, Mango. He is another character for the children’s book I’m working on. Mango is a mischievous monkey. He gets himself and his buddies into trouble sometimes because of it… Once again, I used Expression 3 to ink him in.

The obligatory disclaimer: Please note (as all things on this blog) this is copyrighted 2006, Lost Pencil Animation Studios Inc.



More on Motion Capture…

Why doesn’t full-body motion capture or rotoscoping generally work for performance character animation? Sure, motion capture works to the extent that it captures the motion of a person and maps it onto the skeleton of a 3D character. Rotoscoping works in that you ‘capture’ the motion of a person and transfer it similarly to a 2D character. It works. But why doesn’t it always ‘work,’ and why, when it comes to acting or character performance, does it often fall flat?

The only places I’ve seen motion capture actually work and not fall flat are in realistic game animation, as background characters in realistic films (like a wide-angle shot of the Titanic with virtual people walking around on deck), or as stunt doubles where the actor is temporarily replaced by a virtual clone. So there are obvious places it works. But why doesn’t it work in films like Final Fantasy, Monster House, or Polar Express?

It seems to me that motion capture only works as long as you don’t have a character trying to portray emotions. In a game when you are an army recruit following your buddies into battle, you don’t see much acting going on – it’s purely motion (get from point A to B and be hunched down). If a soldier gets hit, he goes down realistically. If a soldier is running, he runs realistically. If a soldier is trying to evade and hide, it looks real. But the minute he turns around and tries to tell you he’s scared or to keep your head down… it all falls apart.

Consider the emotional moments in the recent ‘Monster House’ film. There were intense moments of emotional outburst from the characters, yet they didn’t feel convincing. Why? They felt… wrong, out of place and toned down – like something was holding the characters back.

I think that there are a couple of factors to consider:

1) Are the characters perceived to be realistic or toony?
2) Is the character’s motion meant to portray simple motion or communicate thought and emotion?

If you put motion capture onto a realistic character and the character is doing simple movements (i.e. not trying to communicate that it is thinking or emoting), then it seems to work. Any other combination fails: motion capture on a toony character doesn’t work, and motion-captured acting meant to convey emotion (or convince us the character is thinking) will fail.

Here are the permutations:

(a). Basic motion mocap on a realistic character
(b). Basic motion mocap on a toony character
(c). Acting mocap on a realistic character
(d). Acting mocap on a toony character

(b), (c) and (d) all seem to fail. I think the people at Pendulum Studios (the ones who created the Mark Antony clip and the facial motion capture system) were trying to make (c) work. But as far as I can tell, they didn’t succeed. There is still something missing – be it in the eyes, the facial expressions or even the body language. It all seemed… well, mechanical and dead. Let’s pretend they had a system that captured all the nuances of facial expression and could map that directly to a special facial/body rig… would it actually work? Well, I guess it would. But wouldn’t that be equivalent to filming a real actor performing their craft?

Seeing an actor in a movie is obviously not the same as seeing a person …err… in person. But visually they are close enough that when we watch a movie we suspend our disbelief and accept the image before us as a real person. I suspect a perfect motion capture system would do the same thing. But a perfect motion capture system already exists… it’s called film. So what’s the point of creating a motion capture system that exactly mimics reality so that you can have a virtual actor? I suppose one reason is virtually bringing someone famous back to life. But why do they have to look exactly like the historical figure? We’re more than happy to have an actor substitute for a past president in a film. I guess I don’t see the point of pushing motion capture technology that far. Personally, I think people are doing it… well, just because they can. But it’s not doing animation or the film industry any favors.

When any kind of motion capture is placed on a toony character it just does not work at all. Why? I think it’s because of our expectations of how a toony character moves. Animation is all about finding the emphasis of a motion – what part of the motion do we want to exaggerate? – and then going about doing that. It’s a distilling of motion down to the parts we want our audience to focus on. Motion capture muddies the water: it shows us all the nuances when we don’t want to see them. So mocap for toons is out. And that’s just regular motion, not acting; if it doesn’t work for regular motion, it certainly won’t work for acting.

Glen Keane Lecture

If you haven’t seen the post on CGChar regarding the lecture that Glen Keane put on for CalArts, you might want to check it out (at the time of this writing there were 13 parts, 5 minutes or so apiece). The quality is poor, but I think it’s still worth watching.
