Video Lecture: 3D Modeling Basics for Beginners – Techniques, AR Tips, and Intro to AI Tools

I have some exciting news! On October 23rd, 2024, I was once again invited to guest lecture at the CMU School of Design. I decided to follow up with a recorded version to share. In this recording, made after the original lecture session, I cover the essentials of 3D modeling with a focus on beginner-friendly techniques. You'll find practical insights into mesh modeling, workflow tips for Blender, and an introduction to preparing models for augmented reality. The full lecture video is embedded below, followed by detailed notes that offer a step-by-step breakdown of theory and techniques for anyone new to 3D design. Dive in, explore, and start building your own 3D modeling skills.

Principles of Mesh Modeling

Why This Lecture Focuses Primarily on Mesh Modeling:

Meshes are the standard 3D model type used in real-time 3D engines—like Unity, Unreal, and virtually every AAA video game title in the last 30 years, going all the way back to Quake, by id Software in 1996.

Key Principles:

  1. Use Quad Faces Whenever Possible: Design your shape faces with quads instead of triangles (tris) and n-gons.
    Reason: Quads are infinitely divisible, making it easier to adjust geometry resolution as needed. Tris and n-gons are not as flexible, which can lead to undesirable artifacts and poor topology.
    3D games primarily use tris instead of quads because triangles are the simplest polygon and always planar (a flat surface), making them computationally faster to render in real time, which was crucial for early gaming systems with underpowered hardware. Essentially, triangles require less processing power to calculate and display on screen than quads, which have more vertices and edges.
    On modern hardware we can get away with more complex geometry, and it's generally a better trade-off to build mesh models from quads. That is, the computational costs are vastly outweighed by the benefits of evenly divisible face geometry and more manageable topology. Lastly, quads are easily converted into tris by adding a diagonal edge between opposite vertices.

  2. Work from the Lowest Possible Polygon Count: Always start with the lowest polygon count (i.e., resolution) for your model. You can increase resolution later with subdivision modifiers, but it's not as easy to reduce the resolution later.
    Reason: Editing a high-resolution mesh is more difficult than working with a low-resolution one, which offers greater control and flexibility. It also takes much more processing power and memory, which will slow down Blender and increase the risk of crashes.

  3. Keep Base Shapes Simple: Keep your base shapes as simple as possible. When adding details, create those elements as separate objects. When you hit a milestone, consider duplicating a model or a collection of models to a new instance for further refinement.
    Reason: This approach makes 3D modeling more manageable, allowing for easier adjustments and maintaining clean geometry.

  4. Use Modifiers and Non-Destructive Editing Whenever Practical: Designing a symmetrical shape? Cut it in half and use a Mirror Modifier to cut your editing time in half. Keep in mind that the most complex designs can ultimately be derived from very basic shapes: Spheres, Cones, Toruses, and Cubes. (See the scripted Mirror Modifier sketch after this list.)

  5. Work From Reference Images, Even If Just A Few Basic Sketches: Press Shift + A to open the Add menu. Navigate to Image > Reference. Select the image file you want to use from your computer. The reference image will be added to your 3D Viewport, where you can position, scale, and rotate it as needed for your modeling task.

  6. Build The Overall Form First, and Then Separate into Smaller Objects: This will ensure that your designs are cohesive and edges are properly aligned. When you're ready to divide into separate objects, duplicate the objects into a new Collection.

  7. Experiment, Tinker, Explore, and Start Over: You're unlikely to get the design right on the first attempt. It's often necessary to work through the problem, and then start over from scratch once you've had enough time to explore the form. Reason: Your second draft will almost certainly be better than the first.
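
To make principle 4 concrete, here's a minimal Python sketch of the Mirror Modifier setup using Blender's bpy API. Treat it as a starting point, assuming a half-modeled mesh object is active, not a drop-in tool:

import bpy

# Non-destructive symmetry: model one half, mirror the other.
obj = bpy.context.active_object
mod = obj.modifiers.new(name="Mirror", type='MIRROR')
mod.use_axis[0] = True       # mirror across the X axis
mod.use_clip = True          # keep vertices from crossing the mirror plane
mod.use_mirror_merge = True  # merge vertices that meet at the center seam

# Per principle 1, quads convert cleanly to tris when needed
# (run in Edit Mode with faces selected):
# bpy.ops.mesh.quads_convert_to_tris()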

Blender Quality of Life Recommendations:

  1. Save Your Project Files Early and Often: Use Blender's "Save Incremental" (⌥⌘S / Option + Command + S) to manage version control. Doing this will give you the freedom to fearlessly tinker and explore (as mentioned in the previous point) before settling on a final design.

  2. Crank Up The Number of Undo Steps: Open Edit from the top menu. Select Preferences to open the Blender Preferences window. In the Preferences window, click on the System tab. Scroll down to find the Undo Steps setting.

    Increase the value (the default is 32). If you have enough system memory, set it to 256 for more flexibility in undoing actions. Close the Preferences window to save your changes. (A scripted version of this change appears after this list.)

  3. Consider Using A Material Library: Blender has a basic built-in material library, but it's not very useful. Look into large libraries, such as the PBR Material Asset Library + OneClick Add-on for Blender (https://shardsofred.gumroad.com/l/CfOnY). Creative Commons Zero (CC0) materials can be used for basically anything and will save you time.

  4. Remember to Perform a UV Unwrap on Your Model Geometry for Best Results When Texturing: The most realistic textures in the world won't help you if your model doesn't have good UV mapping. Remember the chocolate Santa Claus example? Proper unwrapping is essential for creating realism with your models. https://docs.blender.org/manual/en/latest/modeling/meshes/uv/applying_image.html

  5. Recommended Extensions and Add-ons:

    • VDM Brush Baker: Helps you create and bake Vector Displacement Maps directly in Blender.

    • Bool Tool: Boolean operations for complex shape creation.

    • Node Wrangler: Enhances node editing management.

    • Rigify: Automated rigging solution for character animation.

    • Loop Tools: Useful for organic modeling (with some bugs appearing in Blender 4.2—be sure to keep this add-on updated!).

  6. Other Useful Add-ons: Auto Mirror, F2, Extra Mesh/Curve Objects, Extra Grease Pencil Tools, Copy Attributes Menu, and MeasureIt.

    Bonus: Need furniture? Most of IKEA's product catalog has 3D models available. Search for "IKEA" under Extensions and you can easily search for and import 3D models into your scenes.
    Note: Ensure 'Allow Online Access' is enabled in Blender's System Preferences for add-on updates.
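
A quick aside on recommendation 2: the same Undo Steps change can be made from Blender's Python console. A minimal sketch using the standard preferences API:

import bpy

# Equivalent to Edit > Preferences > System > Undo Steps.
bpy.context.preferences.edit.undo_steps = 256  # default is 32
bpy.ops.wm.save_userpref()  # persist the change across sessions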

Create Augmented Reality Experiences for iOS with Xcode Developer Tools, Reality Composer, and USDZ File Format

Once you've finalized your form, added necessary details, and applied your materials, you should be ready to export your model.

Step-by-Step Instructions for Preparing 3D Assets for Export to USDZ:

  1. Duplicate Your 3D Assets and Collections: Create a new instance of your 3D assets specifically for export.

  2. Apply All Transforms: Hit A to select all visible objects, then press ⌘ + A (Command + A) and select All Transforms to apply.

  3. Apply All Modifiers: Apply all modifiers in the same order they were added to each model—except for subdivision, as tessellation data can (usually) be included without applying it directly to the models.

  4. Join All Components: Hit A to select all visible objects, then press ⌘ + J (Command + J) to perform a join operation. (Steps 2 through 4 can also be scripted; see the sketch after these instructions.)

  5. Export the File: Go to File > Export > Universal Scene Description (usd*).

  6. Configure Export Settings:

    • Include: Check Visible Only and Selected Only.

    • Blender Data: Select Custom Data.

    • Namespace: Use the default setting (UserProperties).

    • Blender Names: Enable this option.

    • File References: Set to Relative Path.

    • Convert Orientation:

      •  Z = Forward Axis

      • Y = Up Axis

        Note: Many other 3D tools, including Xcode's tools, interpret 3D models with a different axis orientation than Blender. If you don't apply this conversion, you'll find your model improperly rotated following import. If this happens to you, double-check these settings.

    • Use Settings for Render: Enable this option.

    • Object Types: Select Mesh, Volumes, Curves.

    • Geometry: Enable UV Maps, Rename UV Maps, Normals.

    • Subdivision: Set to Best Match.

    • Rigging: Enable Armatures (if you have rigged and animated your model).

    • Materials: Select USD Preview Surface Network and Export Textures.

    • USDZ Texture Downsampling: Set to 1024px or up to 2048px (the largest size acceptable for iOS QuickLook).

  7. Update File Extension: Change the export file name extension from .usdc to .usdz.

  8. If no issues are encountered after export, you should be able to view your model in Augmented Reality on any iOS device. Open your exported file from iCloud, or send it via email, text message, or AirDrop to another device to view it.
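
If you find yourself exporting repeatedly, steps 2 through 4 can be scripted. The sketch below uses standard bpy operators and assumes you're in Object Mode with only your export Collection visible; the subdivision check is a simplification of step 3:

import bpy

# Step 2: select all visible objects and apply all transforms.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

# Step 3: apply modifiers in stack order, skipping Subdivision Surface.
for obj in bpy.context.selected_objects:
    bpy.context.view_layer.objects.active = obj
    for mod in [m for m in obj.modifiers if m.type != 'SUBSURF']:
        bpy.ops.object.modifier_apply(modifier=mod.name)

# Step 4: join everything into the active object.
bpy.ops.object.join()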

Setting Up Xcode and Reality Composer:

The latest version of Xcode doesn't include Reality Composer, as Apple has shifted their focus to the Vision Pro. You can still access the Augmented Reality Tools for iOS devices, with some additional steps.

Step-by-Step Instructions:

  1. Download the Latest Version of Xcode 14: Download it from https://developer.apple.com/download/all/

    NOTE: You'll need to create an Apple Developer Account (it's free) to access the above link, or use this direct link: https://download.developer.apple.com/Developer_Tools/Xcode_14.3.1/Xcode_14.3.1.xip

  2. Extract and Rename The Older Version of Xcode: Rename Xcode.app to Xcode14.app and place it in your Applications folder.

  3. Open Terminal on Your Mac.

  4. Open the Applications Folder in Finder.

  5. Drag the Xcode14 App into Terminal: This will automatically add its path.

  6. Add to the Path: Next to the path, add: /Contents/MacOS/Xcode.

  7. Full Command Example: The command will look like:

    /Applications/Xcode14.app/Contents/MacOS/Xcode

  8. Run the Command: Press Enter to run the command.

  9. You should now have access to Reality Composer in Xcode. Click on the Xcode menu in the menu bar, then click Open Developer Tool, and then click on Reality Composer.

    Learn more about using Reality Composer here: https://developer.apple.com/documentation/realitykit/realitykit-reality-composer
    Learn more about Apple Reality Kit and ARKit here: https://developer.apple.com/augmented-reality/tools/

BONUS: Generative AI and 3D

Tripo AI (https://www.tripo3d.ai/app) is an advanced generative AI tool that allows for both text-to-3D and image-to-3D model generation. This tool offers users an intuitive way to create complex 3D assets with minimal manual input, simply by describing what they need or providing a reference image.

Key features:

  • Text-to-3D and Image-to-3D Conversion: Users can input a detailed description or upload an image, and within seconds, the AI generates a draft model ready for refinement.

  • Prompt: "A pineapple-hedgehog with spiky fruit armor and leafy quills."

    https://tripo3d.ai/preview?share=9a57357e-6262-469c-afb1-c7af74d92c93

  • Prompt: "A 1980s sci-fi robot stylized as a Nintendo NES product."

    https://tripo3d.ai/preview?share=a08a55cd-9e66-48a5-be3d-85a26160e461

  • High-Speed Generation: Tripo’s AI processes are optimized for efficiency, allowing users to generate detailed models in a matter of seconds, ideal for prototyping or quick visualizations.

  • Customization Tools: After generating a model, users can adjust topology for increased detail or apply stylization, such as voxels.

  • Seamless Integration: Tripo3D supports a variety of export formats, such as .usdz, .obj, and .fbx, making it easy to import models into Blender and other software for further editing.

  • Generate Full Texture Maps with PBR: Generates physically based rendering (PBR) textures, adding even greater detail beyond the geometry.

  • Automatic Rigging and Basic Animations: Applies a basic animation rig to generated models and can add simple animations, such as a running character, to the model geometry.

Downsides:

  • Imprecise generation: just like AI image generators, results are unpredictable and often wrong.

  • Costs: Using this tool requires a membership plan with limited monthly credits, which limits usage.

CREDITS:

Thanks to all of these wonderful educators and content creators who continue to inform and inspire me throughout my 3D journey. Preparing this lecture required lots of time and consideration for how to condense what I’ve learned over the last five years into something I could demonstrate in under 2 hours. This wasn’t easy, but I had many fantastic resources to pull from.
If I’ve left anyone out, please leave a comment so I can include them here:

YouTube Creators:

Reference Files:

Robot model created with Tripo AI

Robot model with corrected orientation

Reality Composer demo file

Interactive USDZ demo file

Note: Due to a bug, the robot walking animation doesn’t playback in QuickLook AR for iOS.

Scripting in Blender

For folks who use Blender to create animated characters for Unreal Engine, you might find the process of creating a rig fairly tedious. You may also discover that some constraints in Blender are causing problems with your exported skeleton in Unreal.

If you need to sanitize your bone constraints, this work can be labor intensive if done manually.

Here’s a simple script that will automate that process. (shoutout to manuDesk for providing this)

import bpy

# Iterate over a copy of each bone's constraint list so that removing
# items doesn't skip entries mid-loop.
for bone in bpy.context.selected_pose_bones:
    for c in list(bone.constraints):
        if 'Constraint Name' in c.name:
            bone.constraints.remove(c)  # Remove constraint

Replace “Constraint Name” with the name of the constraint you wish to remove from your rig, and then run the script. This could save you hours of work.

Suppose you need to copy constraints between a control rig and a deform rig. CGDrive has provided an entire video tutorial on this process. Below is the code used in their script. Watch the video for a basic workflow, as there is some preparation necessary for this script to work.

import bpy

# For each selected pose bone, point its "Copy Transforms" constraint
# at the bone of the same name on the target armature.
sel = bpy.context.selected_pose_bones
for bone in sel:
    name = bone.name
    bpy.context.object.pose.bones[name].constraints["Copy Transforms"].subtarget = name

Summer Internship: mid-July Update

It’s been nearly a full month since my last update, so what have I been working on?

The last few weeks have been incredibly busy. I’ve had several interviews and job application submissions. My gallery and design pages received some helpful updates. I’ve successfully built a standardized data format for a visualization project and introduced some of my fellow interns to the history and purpose of data visualization, and its role in matters of social justice and equality. I’ve also continued to make leaps and bounds on my 3D character work.

I’ve resolved the primary issue that was plaguing me early on in this project: mesh topology.

To be efficient for a game engine, it’s important to resolve mesh geometry with minimal faces. Unreal Engine 5 supports a new level-of-detail automation called Nanite, which allows 3D modelers to create elements with a virtually unlimited number of faces (geometric detail). This does not seem to work with animated characters, however — if you know a way around this, please send me a message!

A “face,” in this context, refers to a polygon defined by 3 (tris) or more points. Ideally, the geometry should have faces with 4 points (quads). While modern engines and graphics hardware have gotten significantly more advanced since the early days of 3D gaming, it’s nevertheless still important to avoid models with excessive polycounts (the sum total of faces in a model).
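
If you want to see where a model stands, polycount and face types are easy to tally from Blender's Python console. A small sketch, assuming a mesh object is active:

import bpy

# Count tris, quads, and ngons on the active mesh.
mesh = bpy.context.active_object.data
tris = sum(1 for p in mesh.polygons if len(p.vertices) == 3)
quads = sum(1 for p in mesh.polygons if len(p.vertices) == 4)
ngons = sum(1 for p in mesh.polygons if len(p.vertices) > 4)
print(f"{len(mesh.polygons)} faces: {tris} tris, {quads} quads, {ngons} ngons")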


The challenge with my model is that it requires several tangent geometries that intersect with a central body. Imagine you were modeling a tree character, and you needed to cover the various branches with leaves, blossoms, or pine needles. Blender can easily generate these elements using weighted vertex mapping and particle systems (hair). The difficulty, however, comes from trying to export this data for use in Unreal Engine.


Unreal Engine supports FBX models for import, but FBX models do not support shape keys. By default, Blender applies the Particle System modifier as a shape key. You can, however, convert particle instances into their own individual mesh components in Blender prior to export. This works, but Unreal will interpret the model as several individual bodies with shared coordinates. When exporting this type of arrangement, animation rigs cannot deform the free meshes — your leaves (or whatever) will stay in place while the rest of the tree animates.

You may already be thinking to yourself, “Why not just combine the meshes?” Since the beginning of this project, I anticipated that this might be necessary at some point. The challenge, then, is how to create a combined mesh that is free of visual artifacts. Additionally, the Boolean modifier in Blender only allows you to select a single target body to combine (union).

Thankfully, there is a Blender add-on for this last issue: BoolTool.

Suppose you want to combine all of your individual leaf meshes with the main tree body, but you know it would be insanely time-consuming to go through the tedious sequence of selecting each leaf, one at a time, applying the modifier, creating a new modifier, and repeating that ×700.

You could try writing a script to automate this process, but you’d need to write it in Python, and that would be a can of worms unto itself. You can already imagine the mission creep setting in…

BoolTool makes this process dead simple. You just select the mesh body you want everything to join with (active object), and then select all of those other meshes. Go to the BoolTool menu and select “union.” You may think that Blender has crashed, but give your computer a break. Grab a cup of coffee, meditate, embrace the here-and-now… *BING* [Your mesh combine operation is complete!]

BlenderBool.png
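
For reference, the scripted route I decided against would look roughly like the sketch below. It is hypothetical and untested at the scale of my scene, and it assumes recent bpy operator behavior; it unions every other selected mesh into the active object, one Boolean modifier at a time:

import bpy

# Union each selected mesh into the active object, then delete it.
target = bpy.context.active_object
others = [o for o in bpy.context.selected_objects
          if o is not target and o.type == 'MESH']
for obj in others:
    mod = target.modifiers.new(name="Union", type='BOOLEAN')
    mod.operation = 'UNION'
    mod.object = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)  # target must be active
    bpy.data.objects.remove(obj)  # clean up the merged-in mesh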

Assuming you created an efficient mesh, built primarily out of quad faces, with intersecting meshes containing the same number of vertices, you should have a single mesh geometry that is ready for rigging and animation.

I’ve tried dozens of other approaches so far, and this method produces the best results all-around: faster, easier to execute, high fidelity, and with the lowest poly count.

I’m still working out the kinks on my current model, and the animation sequences will be a next-level creative challenge for me (I have a lot of inspiration and ambition on this front), but I feel confident in the current direction, and early tests have been quite promising.

I’m also starting to make simple prototypes for augmented reality, and will add some notes about that later this month.

Kinetic-friendly spoon project Mega Post

That’s a wrap! It’s certainly been an interesting semester, but now I am ready to put it behind me. Reflecting on the spoon project, I have some final thoughts and observations. First, I want to thank the fine folks at the CMU School of Design. From the amazing and hardworking faculty to the graduate student cohort, I have received nothing less than inspiration and encouragement throughout this entire process, despite the obvious challenges of working remotely.

Rendering of sixth and final (?) spoon design. I pulled the kitchen design (Pierre Gilles) and bowl (Damogran Labs) from GrabCad.com. The spoon and coffee mug are mine.


This project was divided into two parts: the first part focused on exploring different ways of prototyping and making. This was described to me as an informal way of A/B Testing for methods. The second part involved the deliberate iteration of prototypes through user testing — a challenge in the context of a global pandemic and social distancing. To make the most meaningful design choices possible given limited resources, I decided to leverage the power of physical simulation to supplement the making of physical prototypes.

There are a variety of 3D software tools that offer some degree of physical simulation. For this project, I selected Maxon Cinema 4D R20 (Educational License) and Blender as my two ways of making. I chose these because I am already familiar with Cinema 4D and know how to manage a workflow in that context, because Blender is open source and free for anyone to use, and because both programs work under macOS and Windows environments (my rendering workstation is a Hackintosh with multiple operating systems, which grants the flexibility to overcome certain technical limitations). My initial experiments with Cinema 4D were… not great.

My very first (and failed) attempt to simulate fluids in Cinema 4D. Carnegie Mellon University School of Design Prototyping for Interaction Spring 2020

As you can see, there are “physics” happening here, but they are not anything close to the physics of the real world. This is not “real world” physics, this is Asshole Physics:

Zachary "Spokker Jones" Gutierrez and I came up with the term "Asshole Physics" when we were discussing the game and the physics models it employed. Basically there's a lot of crap you can knock over and kick around, including dead bodies, buckets, cans, and little sections of drywall which are standing around in the middle of rooms for no obvious reason. Zachary casually mentioned, "I have made it a point to knock over every fucking thing in that game. I am living out my fantasies of being a giant asshole," and I responded by stealing his "asshole" comment and claiming that I made it up. Thus "Asshole Physics" was born.

Without more sophisticated plugins to simulate fluid, Cinema 4D R20 is only capable “out of the box” of non-Newtonian semisolids. I can make stuff bump around and “squish.” I can have a 3D character micturating on the side of a building. I can create the appearance and illusion of something like a fluid, but with such restrictions, I could not realistically evaluate my spoon designs. I explored my options and found that Next Limit’s RealFlow plugin would meet my basic needs. Best of all, they offer a free 30-day trial! My initial excitement quickly waned after the plugin failed to install and activate on my system…

(This email chain is long and covers a week of back and forth with customer service. I am including the entire conversation as a way to recreate my experience. While this may not directly relate to the scope of this project, I still believe that there is value in documenting the unexpected problems that crop up when trying to do something new.)

(Email screenshots: Mail_02.png through Mail_09.png)

It took a week to finally get everything sorted with the demo. During that time, I began to explore option B: Blender.

Blender is a free, powerful, open-source 3D creation tool. Best of all, it includes the Mantaflow fluid simulation engine (since version 2.82). I have worked with Cinema 4D on other projects and have become fairly comfortable with its interface. Given my experience with Fusion 360, Inventor, and C4D, I knew that I would need to overcome a learning curve before I could use this software to meet my needs for this project. Fortunately, I was able to find a spectacular tutorial series for beginners.

If you want to read more about my experience with the tutorial, click here.

This tutorial was ideal because it involved exercises that helped me learn how to use the interface, and covered several different workflows. I was really impressed with Blender’s node-based material system and procedural textures. You can work strictly with parametric modeling, or you can discretely modify mesh geometry to create highly organic and imperfect forms. I’m excited to work with Blender on future projects. It’s a very exciting time to be working in 3D.

While working through these tutorials, I began sketching and working in Fusion 360 to craft my first spoon designs for part 2 of this project. You can read more about this experience here.

Takeaways from Part 1

I really appreciated the responsiveness from the team at Next Limit. Clearly there are problems with the implementation of their product’s copy protection. This is an all-too-common problem in the world of software. Programmers gotta eat just like everybody else, and we certainly should make sure that the talented and hardworking folks behind the code are able to put food on their table at the end of the day. Piracy can deprive a small business of the revenue necessary to keep the lights on, so I am absolutely sympathetic to this reality and the risks involved when you release your software for demo purposes. Getting people to pay for something that they can easily get for free is a challenging proposition. At the same time, you cannot realistically expect customers to pay for software if they cannot try it first. Ultimately, this one week of back and forth with customer support was a critical loss. I never completed a side-by-side comparison of fluid simulations. While I did eventually succeed at installing and using RealFlow for fluid simulations (and was honestly impressed with how easy it was), I did not have enough time to set up a comparable simulation to evaluate spoon designs. My trial expired about a week ago, and I see this aspect of the project as a lost opportunity. If Next Limit applied licensing practices similar to Maxon’s (verification through a .edu email address), they could offer an educational package of their RealFlow plugin.

Blender really came through for me. The learning curve was aggressive, but not impossible. While I found mantaflow to be a respectable and entirely capable fluid simulator, it was not without its own share of issues. I spent a lot of time making granular tweaks to improve the fidelity of my simulations, while also using the observations from my simulations to inform design decisions for my spoons in part 2 of this project.

Part 2: Design Iterations Based on User Testing

While this project required user testing and design iterations based on feedback, I decided to limit the user evaluations to address handle shape and the spoon’s overall dimensions. This was not an arbitrary decision or an excuse to focus on physical simulation of fluid dynamics (with user testing as an aside). No, this decision was based on the nature of the course in which it was assigned: Prototyping for Interaction Design. This semester I have been focusing on designing for interaction (arguably, all designers do, at some point in their process, focus on this aspect). When thinking about the tools we use (to eat food) as a system, it is important to consider the touchpoints involved. The handle of a spoon is a non-trivial component. It can take on many forms, and naturally includes affordances. How someone holds a spoon, and how easy it is for them to use it, are central to the evaluation of the design.

The design iterations were highly generative in nature. Inspired by both user evaluations and physical simulations, I maintained a homeomorphic continuity: treating the initial shape as an elastic form to be molded and reshaped to maximize performance. Knowing how a concave shape might be optimized to perform under rapid movement is only one aspect of a more complicated interaction; the physical simulation of fluids facilitated a means of evaluation, but that test alone could not fully address human needs. When physical form is designed and directed to improve user interaction (and physical properties are given equal consideration), it is possible to create a truly useful tool. I realize that this is a very technical description, but it is easier to understand when properly visualized. I have rendered a compilation sequence to show how this spoon shape evolved to its final(?) form (I am still considering a physical prototyping stage for this project over the summer).

A sequence of fluid dynamics tests designed to evaluate fluid retention of concave forms. Carnegie Mellon University, School of Design, Prototyping for Interaction, Spring 2020.

Toward the latter half of this sequence, you will notice a change in colors (for both the liquids and spoons). I decided to differentiate the final rendering sequences, as these were based on user evaluations. The colors chosen for these final sequences are based on the color tags used for the user test:

These printouts are derived from DXF vector images exported from Fusion 360. The designs shown are oldest (top) to newest (bottom). The fifth design (blue) is rendered with a blue body and green liquid.


I printed and mailed the paper prototype to a potential user suffering from ongoing hand tremors (my partner’s mother). I sent this without written instructions. Instead, I only provided different color tags to facilitate feedback. My user let me know that the red spoon handle was in the “Goldilocks” zone in terms of size and shape: not too big, not too small, not too curvy, not too straight. Using this feedback I constructed the sixth and final (?) form — see the first image of this post.

The user test included a direct side-by-side comparison with existing dinnerware.


Before developing these simplified paper prototypes, I also experimented with ways of making more three-dimensional forms that could be sent in the mail. While this novel approach showed some potential, I was concerned with how user error might complicate or (even worse) bias feedback. Still, these paper prototypes helped me to better understand and interpret the scale of my 3D models.

PaperPrototype_01.jpg

Final Thoughts

This project still feels somewhat incomplete. Perhaps this is because the generative design process itself can always demand further iteration, or maybe it is because I have not yet created a physical prototype that can actually be tested as an eating instrument. Maybe it is only because there were still a few “rogue droplets” (grrrrrr) that I simply could not keep contained with the completion of my sixth iteration. Whatever the net effect might be from these various shortcomings, I am pleased with the learning opportunities that were presented throughout this exploration of design.

Were I to continue with this process, the next steps would be to 3D print the latest shape using a food-safe material (there are a few third-party vendors that offer this service). I would then ship that latest design for further user evaluation. I believe that there are still many additional iterations necessary before I could defend having created something that satisfies the criteria I set out with this project (i.e., a spoon that overcomes the challenges of involuntary muscle movements and essential tremors).

If I were to collaborate with others, I would also want to evaluate the ecological and economic impact of such a device. How might we go about manufacturing at an appropriate scale? How might additional user tests with a wider audience influence the existing form? There remain many unanswered questions, and a newfound respect for the power of generative design.

Bugs in the Blender

I have continued to have luck exploring fluid simulations in Blender, but this process has not been without its quirks. I recently encountered a strange issue related to the Particle Radius setting.

Particle Radius

The radius of one liquid particle in grid-cell units. This value describes how much area is covered by a particle and thus determines how much area around it can be considered as liquid. A greater radius will let particles cover more area. This will result in more grid cells being tagged as liquid instead of just being empty.

Whenever the simulation appears to leak or gain volume in an undesired, non-physically-accurate way, it is a good idea to adjust this value. That is, when liquid seems to disappear, this value needs to be increased. The inverse applies when too much liquid is being produced.
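
If you'd rather tweak this from the Python console while iterating, the setting lives on the fluid domain object. A minimal sketch; the object name "Domain" is a placeholder for whatever your domain is called:

import bpy

# Raise the particle radius when liquid seems to disappear;
# lower it when the simulation gains volume.
domain = bpy.data.objects["Domain"]  # placeholder name
settings = domain.modifiers["Fluid"].domain_settings
settings.particle_radius = 1.5  # default is 1.0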

What does this look like in practice? My most recent simulation actually seems to produce fluid as the scene progresses.

Nevertheless, I was able to gain critical insights into this form and will continue to iterate new designs. This is being done in conjunction with paper prototyping. These forms are less sophisticated, but still provide valuable information about how users will experience and interact with this flatware.

SpoonPrototype.jpg

Spoonfuls of updates

This week was packed full of progress on multiple projects. I received feedback for my group’s birth control information app “MyGallery.” Our work was even featured on CMU’s Design page.

Crafting an iconographic representation for the withdrawal method was my proudest moment.


I’ve continued to explore fluid simulations with Blender. I’ve run into some technical hurdles: Blender 2.82 uses a variety of protocols to leverage GPUs for rendering and computation. It offers an AI-driven denoiser (OptiX), CUDA path tracing, and OpenCL. My MacBook Pro has an AMD Radeon Pro 5500M GPU as well as the option to plug in a Radeon Frontier Edition (first-generation Vega) eGPU over Thunderbolt 3. Plenty of GPU compute power in either configuration, but there is a snag: macOS 10.15 (Catalina) has deprecated OpenCL in favor of Metal 2+. CUDA and OptiX are proprietary to NVIDIA GPUs. Apple hasn’t shipped a Mac with NVIDIA GPUs since Kepler launched (GeForce 700 series). Blender supports AMD ProRender, but I found it was terribly unstable.
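
For anyone untangling the same mess, the Cycles compute backend can also be switched from Python. A sketch against the 2.8x preferences API; which backends actually appear depends on your OS and GPU, as described above:

import bpy

# Pick a Cycles backend: 'CUDA' or 'OPTIX' (NVIDIA) or 'OPENCL' (AMD on 2.8x),
# then tell the scene to render on the GPU.
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'
bpy.context.scene.cycles.device = 'GPU'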

I could easily slip into a tangent about how unfortunate the breakup between Apple and nVidia truly is, but I will spare you.

My current workflow involves queuing some tasks to my desktop, running Windows 10. The GPUs are dual Radeon VIIs. Unfortunately, I found that rendering on Blender is unstable when both GPUs render in parallel. No problem, since I can free up the other GPU for Folding@Home (a hobby of mine that has exploded in response to COVID-19). Who would have guessed that a global pandemic would boost a distributed computing project to exascale?

Despite these obstacles of platform compatibility, I have made significant progress on my simulation-based research. It is difficult to overstate how exciting this project has been for me. For some context: the ASCI Red supercomputer (at Sandia National Laboratories) was built in 1996 and was the fastest supercomputer in the world until 2000. It was the first computer to achieve true terascale computing (one trillion floating-point operations per second). I built my first terascale computer in 2013. This was shortly after leaving my job at Intel. There was something very gratifying about building a computer with a CPU I helped manufacture. GLaDOS G4 (you can see the project here, scroll down to “Everything Else”) was built with a GeForce GTX 780 GPU and an Intel Core i7-4770K overclocked to 4.5 GHz. It ran nearly silent and fit inside an upcycled Apple Power Mac G4 (microATX-equivalent) case.

The ASCI Red supercomputer was designed to simulate nuclear weapons tests. Today, I am using a system roughly ten times more powerful to simulate soup spilling out of a spoon. I was inspired to approach this problem by two projects. The first was a 2013 project from Portland State University (my alma mater) to make a coffee cup for zero-gravity environments. They used drop cages and 3D printing to iterate several designs until they had a shape that held liquid. “It wasn’t needed, but it was requested.”

The other project hit me right in the heart.

The S’up Spoon is the embodiment of good design. The design was inspired by deep empathy for a user’s problem, and the solution involves as little design as possible. There are few technologies in this world that we trust enough to put in our mouths. If you can make it in this space, you can make it (almost) anywhere. During the fall semester, Moira and I visited the Carnegie Museum of Art. They had an exhibition on accessibility design, and I was brought to tears by stories of innovation and vibrant improvements to quality of life for people with disabilities. Technology, at its very best, empowers people to realize their fullest potential. We can easily get lost in the exhilaration of the complex, but this impulse must not dampen our ability to appreciate the elegance of simplicity. Some problems are best solved by form. I saw many incredible solutions in that exhibition, but this spoon has really stuck with me.

My goal is not to make something better, but perhaps a little bit different. The shape of the S’up spoon is intuitive, and if we had never seen a spoon before, we might conclude that it is the better design compared to more traditional forms. It is, however, under our current cultural context, a strange thing to behold. It looks more like a wizard’s pipe or a warrior’s horn. It is beautiful and ergonomic. I do not intend to elevate those specifications. Instead, my goal is to make a spoon that is inconspicuous while still achieving similar results for users who suffer from motor movement difficulties.

How has my first design fared under simulation?

While I can certainly see the appeal of a long hollow channel, I’ve become increasingly concerned with how this shape may be difficult to keep clean. I can imagine objects getting wedged toward the back, depending on what is being consumed. I have begun to work on a second iteration with a shallower channel. Still, this first iteration does fairly well. It manages to retain most of the 15 ml (i.e., 1 tablespoon) of fluid under rapid movement.

I enjoyed this simulation so much that I decided to make a rendering:

I have not yet gotten back into Cinema 4D to evaluate RealFlow. Despite the challenges regarding compatibility, I am truly impressed with how powerful Blender has become with this latest release.

Now that I have established this workflow, I can easily switch out revised designs to test under identical conditions. I’m still not sold on the current handle shape, and I think I can improve liquid retention by tweaking the angle of the lips. The flat bottom (Chinese-style spoon) does fairly well, with its obtuse-angled walls. Next, I will try a concave structure with a wider base for the handle and a more aggressive descending angle.

Blender: Time To Make The Donuts

I’m very happy with the results of my first project in Blender.

White Frosting

Classic Pink

And Nekkid!

After struggling with Cinema 4D, I decided to use a plugin rather than trying to kludge together some handcrafted fluid simulation. Unfortunately, I ran into a licensing problem with Next Limit’s RealFlow plugin. I’ve been emailing back and forth all week, and they finally got a fix for me — SUCCESS! While I waited to sort that out, I decided to give Blender a try. I began following this tutorial, but quickly found myself getting lost. I needed to become more familiar with the software and interface. I decided to follow a beginner’s tutorial to get my feet wet. I chose this particular video series because it employs procedurally generated elements, and covers all the basic modeling commands, node handling, and the complete stack of scene construction. And also: donuts are delicious. The world is terrifying, and we could all use something sweet.

I am honestly shocked that Blender is a free program. Many 3D programs are horrifically expensive; without student licensing, I could never afford to touch most of the 3D tools I’ve been learning over the years. Yet Blender seems to be very capable as a 3D program. I have only scratched the surface, but it is very impressive. Now that I have familiarized myself with Blender, and have a working fluid simulator plugin for C4D, I am in good shape to begin A/B testing.