3D printing and human skill

This National Geographic video about 3D printing exemplifies the worst kind of gee-whiz reporting. Just scan a crescent wrench, print it, and bingo, you’ve copied a real tool with moving parts!

Not.

A commenter notes differences between the copy and the original and concludes:

If the real wrench was simply scanned, this would not have happened. A human has built the design data.

3d printing is cool, why do they feel they have to lie about the input method?

The input method is, of course, 3D CAD. From the product brochure for the printer:

Z Corp.’s 3D printing technology leverages 3D source data, which often takes the form of computer-aided design (CAD) models.

Gee-whiz reporting insults our intelligence and trivializes its subject matter. It’s fun to imagine a magic replicator, but it’s more interesting to know about the human/computer interaction that makes real replication possible.

I once wrote a review of a dozen 3D CAD programs for BYTE Magazine. The benchmark was a model that I commissioned an architect to design. We called it the BYTE Pantheon, and it looked like this:

My job was to construct that model in each of the dozen CAD programs. It was hard! That was partly because I had no prior experience with CAD software. But it was also because each program had its own way of using 2D gestures to manipulate 3D objects. That was my main takeaway from the project: there wasn’t (and I think still isn’t) a standard suite of gestures.

Even if there were, and even (I suspect) when you can use 3D gestures, it would still be hard, because you are making a precise description of a complex object. Wikipedia calls CAD an “industrial art” for good reason. Models with the same functional qualities can differ in style and sophistication, and those differences come into play when the model is shared and modified. Or so I imagine, anyway. I’m not a 3D modeler, but I understand 3D modeling to be a process akin to programming.
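
To make the analogy concrete: a precise description of even a simple solid reads a lot like a program. Here is a minimal sketch (in Python, purely for illustration) that generates the twelve vertices of an icosahedron, the seed shape for a geodesic dome like the one I’ll get to below:

    from itertools import product
    from math import sqrt

    PHI = (1 + sqrt(5)) / 2  # the golden ratio

    def icosahedron_vertices():
        """The 12 vertices: all cyclic permutations of (0, +/-1, +/-PHI)."""
        verts = []
        for s1, s2 in product((1, -1), repeat=2):
            v = (0.0, float(s1), s2 * PHI)
            verts.append(v)
            verts.append((v[2], v[0], v[1]))  # cyclic permutation
            verts.append((v[1], v[2], v[0]))  # cyclic permutation
        return verts

    def normalize(v):
        """Project a vertex onto the unit sphere."""
        n = sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    unit_verts = [normalize(v) for v in icosahedron_vertices()]
    # Adjacent raw vertices are exactly 2 apart, so on the unit sphere
    # every strut comes out to 2 / sqrt(1 + PHI**2), about 1.0515.
    print(len(unit_verts), "vertices; strut length:", 2 / sqrt(1 + PHI**2))

Every coordinate and relationship has to be exactly right or the parts won’t meet. That, in miniature, is what a modeler does all day.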

A friend of mine, Gary Spykman, describes himself as a designer, furniture maker, and artisan. He has also become a 3D modeler, and uses SketchUp to explore his designs and render them for clients. A couple of years ago, Gary designed and built what he calls his cabana. Here it is as designed in SketchUp, and as built in Gary’s back yard.

When I saw what Gary was doing in SketchUp I was inspired to try using the program for some simple needs of my own — to visualize my geodesic tomato suspension dome, and more ambitiously to visualize a remodeling of our kitchen. And you know what? It was just as hard as I remembered! If you do this stuff for a living, as Gary does, then it becomes second nature. But if you only do it occasionally, like me, you’ll be impressed every time with the level of skill required to precisely describe an object or a scene.

Like the commenter on YouTube I have to ask: why lie about this? National Geographic’s gee-whiz reporting doesn’t just fail to inform. It also fails to celebrate the synergy between computational power and human skill that makes 3D modeling so fascinating.


22 thoughts on “3D printing and human skill”

  1. Even if they did actually use a 3D scanner as part of the process (something that’s becoming more and more common in 3D workflows), your point is still valid and quite important. Working with 3D scan data requires a great deal of finesse, smoothing, etc., especially when you want to separate the components into moving parts, which it sounds like is happening here. Also, since these things aren’t standardized yet, anyone doing effective work with a 3D scanner has probably created their own workflows for smoothing and simplifying the data for fabrication.

    It’s like saying that Photoshop magically makes everyone a great photographer just because it can load and alter photographs. There are a lot of people with an incredible amount of artistry and craft skill in that medium.

  2. So incredibly true, Jon.

    To extend your example, the materials science behind creating a wrench (and the metallurgical implications of the processes which form it into a wrench) are far more complex than “creating something that looks exactly like a wrench”.

    I feel like the whole maker movement is being sadly overhyped (sad because I think it’s a fantastically powerful trend towards grassroots innovation) as the “future of manufacturing”, which it is not, at least not for a long time. Wired comes to mind as one of the major offenders.

    I also share your frustration with the various 3D drawing tools out there. You still need to be immersed in that world to use them effectively for anything non-trivial. Even if you can draw something, there’s this assumption that somehow it can magically be converted to gcode to make the part. Bzzzzt. Wrong answer. Perhaps for uber trivial stuff, but not for any reasonably complex item (or one that might actually require more than one tool).
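
    For the curious, the target of that conversion looks something like this: a hand-written sketch of a few G-code moves, with the coordinates, extrusion amounts, and feed rates invented purely for illustration.

        G21                    ; set units to millimeters
        G90                    ; absolute positioning
        G0 X10 Y10 Z0.2        ; rapid move to the start of the first layer
        G1 X30 Y10 E1.5 F1200  ; extrude along one edge (E = filament length, F = feed rate)
        G1 X30 Y30 E3.0 F1200  ; continue the perimeter

    Every one of those values has to be derived from the model’s geometry, layer by layer, and that is exactly the step the magic-conversion story waves away.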

  3. One thing that helps with 3D manipulation is a 3D mouse such as the 3Dconnexion SpaceNavigator (which I use) or similar models from Logitech and other manufacturers. There’s still some learning curve, but it makes the manipulation much more direct.

  4. Jon, et al: I’m very surprised by your comments given the context of the National Geographic show. While you are correct in noting the variances between the scanned and printed wrenches, the objective of that particular portion of the video was to demonstrate how easy it is to make changes to a scanned part using 3D software (we were changing the color of the part at the time). Indeed, I’m sure you all know that this is the most common way engineers work with scanned parts: get it into 3D software first, then stretch this, add that, print, and see if you’re satisfied with the results – a basic iterative design process. We are strong proponents of iterative design because that process produces better results.

     Even if no changes were made to the basic structure of the tool, it is very common for engineers to modify a scanned file, for example to complete the internal workings of a moving part that might not be visible to the scanner. Contrary to your claim, it’s not deception for marketing purposes, just a normal process very familiar to users of all 3D scanners.

     Obtaining a near-exact replica of an object is entirely possible, even though that was not shown in the video. For example, our ZScanners offer XY accuracy of up to 40 microns on our high-end scanner and up to 80 microns on our entry-level scanner, with resolution ranging from 0.050 mm in XYZ (high end) to 0.1 mm in Z (entry level). In fact, our customers are using our scanners for inspection applications where accuracy is mission-critical, as well as for reverse engineering and other applications (see the Mackay Consolidated inspection case study: http://www.zcorp.com/en/Company/Customers/Case-Studies/Mackay-Consolidated-Industries/spage.aspx).

  5. The complaint by the YouTube commenter, with which I agree, is not that there were differences between the scanned and printed parts, nor that ZCorp was in any way deceptive. My beef is with the videographer, who made choices that imply an unrealistic degree of automation.


    it is very common for engineers to modify a scanned file, for example, to complete the internal workings of a moving part that might not be visible to the scanner

    Exactly. And correct me if I’m wrong, but I’m sure that even when the scanner can see everything, the model will typically require human attention. That’s not a bad thing. On the contrary, the iterative computer/human interaction is a wonderful thing. What irks me is that the videographer chose to suppress it.

  6. You are correct in that some human attention would be needed. The National Geographic folks were not at all trying to be deceptive; they were trying to convey the process in a way that would be understandable to the bulk of their audience and within the timeframe they had available for the segment.

  7. I do understand the constraints. But in the time allotted I could have produced a version of that video that told the story realistically and was no less understandable or impressive — and arguably more so.

    If/when ZCorp creates something that teaches people how the process really works, I would love to see it.

  8. Hi Jon,
    I won’t speak for the National Geographic folks. A highly reputable, award-winning media entity, they know how to create educational and entertaining shows for their unique audience. As for Z Corp-produced ZScanner materials describing workflow, we have several already:
    Webcasts (at http://www.zcorp.com/en/forward/events.aspx?c=15): “Can a Car Maker Really Deliver Mass Customization” (4th from the bottom) and “How a Leading Aftermarket Auto Parts Manufacturer Cut Product Development Time by 40 Percent” (bottom of page)
    Demonstration of ZScanner 800 http://www.youtube.com/watch?v=M3zkUSwEBV4
    ZScanner 700/800 video: http://www.zcorp.com/en/Products/3D-Scanners/ZScannerandtrade-700/spage.aspx (click on “video” in right column)

    If you need additional information, please don’t hesitate to let me know.

    Best,

    Julie

  9. If you need additional information, please don’t hesitate to let me know.

    Well, OK. I watched both of the videos you linked to. The first concludes with (at around 6:12) “basically taking that raw point scan data and converting it automatically into an stl file, or a polygon file, or a triangulated mesh.”

    The second says, “once that’s done, we’re going to bring it into the 3D software where we’re going to actually process it and turn it into good-quality CAD data which would be a feature-based solid model or a surface model.”
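
    (For those following along: a triangulated mesh in STL form is just a bag of facets, with no notion of features like holes, threads, or moving parts. A single facet looks like this, with numbers invented for illustration:)

        solid scanned_part
          facet normal 0.0 0.0 1.0
            outer loop
              vertex 0.0 0.0 0.0
              vertex 1.0 0.0 0.0
              vertex 0.0 1.0 0.0
            endloop
          endfacet
        endsolid scanned_part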

    We don’t see either of these processes — that is, either the conversion to polygons, or the conversion of that to a surface or solid model. Can we?

    1. Hi Jon-

      Joe Titlow from Z Corp here…

      Lots of good points raised and I think a great discussion. You’re right that the scanning process isn’t as ‘push button’ as we’d like it to be and there are tools specifically made to bridge the gap to a fully-featured parametric CAD model. The reality is that those tools are not Z Corp products, so we don’t spend a lot of time promoting them.

      To convert scan data to a parametric solid model (it does come out of our ZScan software as polygons), we typically recommend looking at the products from Rapidform and Geomagic, both wonderful tools for this job. You asked to see that step in the process, and searching on YouTube turns up several videos like this one:

      http://www.youtube.com/watch?v=QdLefra8xbo

      I hope that helps…

      Joe

  10. Jon, the processes described in the two webcasts are those used by those two customers, described in their own words. I linked to more than two videos; perhaps the others provide what you’re after. If not, I’m afraid that’s all I have at this point. If you’re looking for a more detailed, technical description, we can certainly provide that as well. We produce videos based on our audiences. When someone needs something more detailed and technical, we generally provide a one-on-one, live demonstration. Please let me know if you’d like us to arrange such a demo for you.

  11. Addressing your comments about SketchUp: I wholeheartedly agree. I too used it for a house remodel/addition. It took a while to orient myself to the modes (and it is very modal, meaning you have to know whether you are navigating or editing at any given time). The second most important thing was making sure the scale was accurate. I wanted something we could use to throw together quick-and-dirty layouts and then hand off to a real architect/engineer to blueprint and pass to the general contractor. That we accomplished, but it was a steep learning curve. And since that one project I haven’t launched the program except to look at those old models. To do it well, you have to practice, over and over.

  12. You said that the reason CAD is so hard for a generalist to master is that each program has its own way of using 2D gestures to manipulate 3D objects.

    Michael Geary commented that one thing that helps with 3D manipulation is a 3D mouse: there’s still some learning curve, but it makes the manipulation much more direct.

    These complexities are a major barrier to accessing 3D technologies such as printing for the many of us who do not do this stuff for a living. Scanning is not the easy answer either. I’d very much like to get comments on our 3D haptic-enabled sketch/modelling software (information/videos at http://anarkik3d.co.uk, currently undergoing updating), as I am putting a crowdfunding proposal together and feedback would be immensely valuable.

  13. Thanks Julie.

    I didn’t and don’t think ZCorp was faking anything. I did and do think that the NatGeo video oversimplified in a way that misled a lot of people about what’s actually possible, and why, and how it requires synergy among computers, software, and people.

    I’m grateful to Joe Titlow for acknowledging that in his comment above.

  14. Great article, Jon. I have done some thinking about the user interface of 3D modeling tools (because I am developing one myself [1]), so I was glad to read your thoughts on the matter. If we permit ourselves to ignore the constraints of currently available input and output means for computing devices, then we can come up with the ideal 3D modeling interface: a holographic display (3D output) and a gesture recognizer in 3D space, something like Kinect (3D input).

    However, given the constraints of a 2D flat-screen monitor for output and a keyboard and mouse for input, there is a fundamental limit on how intuitive you can make the user interface for 3D modeling. The games and movies that deal with the 3D world are largely concerned with 3D output. Due to the popularity of those applications, the computing industry has heavily optimized 3D output, putting that functionality into hardware (graphics cards) to make it fast and realtime. But the input requirements for those applications are rudimentary, so a keyboard and mouse or a joystick did the job. CAD applications, on the other hand, require precise input in 3D space, and since our computing devices are not built for 3D input, that becomes a difficult task. The projection matrix can convert from 3D to 2D easily, but the conversion in the other direction is not trivial.
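
    To make that concrete, here is a minimal sketch (Python with numpy, purely for illustration) of a pinhole projection. The forward map is a couple of divisions; reversing it needs a depth that the screen alone cannot supply:

        import numpy as np

        f = 2.0  # focal length of a hypothetical pinhole camera

        def project(p):
            """Forward map: 3D point (x, y, z) -> 2D screen point (u, v)."""
            x, y, z = p
            return np.array([f * x / z, f * y / z])

        a = np.array([1.0, 1.0, 4.0])
        b = np.array([2.0, 2.0, 8.0])   # same ray, twice as far away
        print(project(a), project(b))   # identical (u, v): depth is lost

        def unproject(u, v, z):
            """The inverse needs z supplied by some extra constraint (a
            workplane, a snap, a second view); the pixel alone only pins
            down a ray through the eye."""
            return np.array([u * z / f, v * z / f, z])

    Every CAD program is, in effect, a collection of such extra constraints, and each one chooses them differently.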

    In our 3D modeling tool (3DTin [1]), we are therefore trying to solve this problem in a different manner. In 3DTin, users can manipulate 3D objects in a very intuitive interface. They can add new instances of geometry, remove them, move them around, rotate them, or flip them. But that’s it. We are limiting the operations offered through the GUI to this handful of functions, so users don’t have to RTFM to do the basic tasks of 3D modeling. We offer advanced modeling features too, but instead of dumbing them down and offering them through menus and toolbars, we offer them through a Python-based scripting environment [2]. We think this approach separates the CAD functionality in the right way: the operations that can be easily learned through a GUI are offered as such, while the operations that give you expert control require you to learn a scripting language (albeit one with an intuitive API). You can find more explanation of this approach in [3] and [4]; a sketch of the flavor follows the links below.

    [1] http://www.3dtin.com
    [2] http://jayesh3.github.com/cadmium/
    [3] http://www.shapeways.com/blog/archives/882-Interview-with-Jayesh-Salvi-of-3DTin-Cadmium.html
    [4] http://blog.3dtin.com/cadmium-solid-modelling-library-python-openso
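
    To give that flavor, here is a hypothetical sketch in the same spirit (the names and signatures are invented for illustration, not the literal Cadmium API; see [2] for the real thing):

        # Hypothetical CSG-style script: 'cadlib' and everything it
        # exports are invented stand-ins, not the actual Cadmium API.
        from cadlib import cube, cylinder

        plate = cube(x=40, y=40, z=5, center=True)         # a 40 x 40 x 5 mm plate
        hole = cylinder(radius=4, height=10, center=True)  # a through-hole
        bracket = plate - hole                             # boolean difference

        bracket.save_stl("bracket.stl")                    # export for printing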

  15. Jon,

    If you’ll recall, I’m the one you hired to create that Pantheon model. It’s a real treat to see it out in the limelight again after all these years.

    Your point about the design tools being too hard is still quite relevant. Give me a shout when you have time to chat about it.

    Warm regards,

    Brad Holtz
    President & CEO
    Cyon Research Corporation

  16. Why would it matter whether that wrench was scanned automatically or not?

    The idea is that 3D printers will hit consumer homes so that consumers can download models.

    No scanning required in the first place.

