A Bicycle for the Mind

Amid the chaos of a world in crisis, I’ve found hope in an unexpected place: coding. With tools like Claude.ai and MCP, I’ve been building a web app to help food pantries serve their communities better—automating inventory, breaking language barriers, and streamlining processes. This isn’t just about code; it’s about turning anxiety into action, using technology to create something meaningful. If you’ve ever wondered how AI can amplify human effort, this is a story for you.


Video Lecture: 3D Modeling Basics for Beginners – Techniques, AR Tips, and Intro to AI Tools

I have some exciting news! On October 23rd, 2024, I was once again invited to guest lecture at the CMU School of Design, and I decided to follow up with a recorded version to share. In this recording, made after the original lecture session, I cover the essentials of 3D modeling with a focus on beginner-friendly techniques. You'll find practical insights into mesh modeling, workflow tips for Blender, and an introduction to preparing models for augmented reality. The full lecture video is embedded below, followed by detailed notes that offer a step-by-step breakdown of theory and techniques for anyone new to 3D design. Dive in, explore, and start building your own 3D modeling skills.

Principles of Mesh Modeling

A Note on Why This Lecture Focuses Primarily on Mesh Modeling:

Meshes are the standard 3D model type used in real-time 3D engines—like Unity, Unreal, and virtually every AAA video game title of the last 30 years, going all the way back to id Software's Quake in 1996.

Key Principles:

  1. Use Quad Faces Whenever Possible: Design your shape faces with quads instead of triangles and ngons.
    Reason: Quads are infinitely divisible, making it easier to adjust geometry resolution as needed. Tris and Ngons are not as flexible, which can lead to undesirable artifacts and poor topology.
    3D games primarily use triangles (tris) instead of quads because triangles are the simplest polygons and are always planar (a flat surface), making them computationally faster to render in real time, which was crucial for the underpowered hardware of early gaming systems. Essentially, triangles require less processing power to calculate and display on screen than quads, which have more vertices and edges.
    On modern hardware we can get away with more complex geometry, and it's generally a better trade-off to build mesh models from quads. That is, the computational costs are vastly outweighed by the benefits of evenly divisible face geometry and more manageable topology. Lastly, quads are easily converted into tris by producing diagonal edges between the four vertices.

  2. Work from the Lowest Possible Polygon Count: Always start with the lowest polygon count (i.e., resolution) for your model. You can increase resolution later with subdivision modifiers, but it's not as easy to reduce the resolution later.
    Reason: Editing a high-resolution mesh is more difficult than working with a low-resolution one, which offers greater control and flexibility. It also takes much more processing power and memory, which will slow down Blender and increase the risk of crashes.

  3. Keep Base Shapes Simple: Keep your base shapes as simple as possible. When adding details, create those elements as separate objects. When you hit a milestone, consider duplicating a model or a collection of models to a new instance for further refinement.
    Reason: This approach makes 3D modeling more manageable, allowing for easier adjustments and maintaining clean geometry.

  4. Use Modifiers and Non-Destructive Editing Whenever Practical: Designing a symmetrical shape? Cut it in half and use a Mirror Modifier to cut your editing time in half. Keep in mind that the most complex designs can ultimately be derived from very basic shapes: Spheres, Cones, Toruses, and Cubes. (See the scripting sketch after this list for a few of these principles in action.)

  5. Work From Reference Images, Even If Just A Few Basic Sketches: Press Shift + A to open the Add menu. Navigate to Image > Reference. Select the image file you want to use from your computer. The reference image will be added to your 3D Viewport, where you can position, scale, and rotate it as needed for your modeling task.

  6. Build The Overall Form First, and Then Separate into Smaller Objects: This will ensure that your designs are cohesive and edges are properly aligned. When you're ready to divide into separate objects, duplicate the objects into a new Collection.

  7. Experiment, Tinker, Explore, and Start Over: You're unlikely to get the design right on the first attempt. It's often necessary to work through the problem, and then start over from scratch once you've had enough time to explore the form. Reason: Your second draft will almost certainly be better than the first.
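
To make a few of these principles concrete, here is a minimal sketch in Blender's Python API (bpy) showing principles 1, 2, and 4 together: a low-poly base mesh mirrored non-destructively, subdivided for resolution, and triangulated only at the end of the stack. It assumes the active object in the viewport is a mesh; the modifier names are arbitrary.

    import bpy

    obj = bpy.context.active_object  # assumes a mesh object is selected

    # Principle 4: model half the shape and mirror it across X non-destructively.
    mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
    mirror.use_axis[0] = True  # mirror across the X axis

    # Principle 2: keep the base mesh low-poly; add resolution with subdivision.
    subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
    subsurf.levels = 2          # viewport preview resolution
    subsurf.render_levels = 3   # final render resolution

    # Principle 1: quads divide cleanly; convert to tris only at the very end,
    # when a game engine needs them.
    obj.modifiers.new(name="Triangulate", type='TRIANGULATE')

Because all three operations are modifiers, the underlying quad mesh stays simple and editable; the tris exist only in the evaluated result.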

Blender Quality of Life Recommendations:

  1. Save Your Project Files Early and Often: Use Blender's "Save Incremental" (⌥+⌘+S) (Option + Command + S) to manage version control. Doing this will give you the freedom to fearlessly tinker and explore (as mentioned in the previous point) before settling on a final design.

  2. Crank Up The Number of Undo Steps: Open Edit from the top menu. Select Preferences to open the Blender Preferences window. In the Preferences window, click on the System tab. Scroll down to find the Undo Steps setting.

    Increase the value (the default is 32). If you have enough system memory, set it to 256 for more flexibility in undoing actions. Close the Preferences window to save your changes. (This can also be set from Python; see the sketch after this list.)

  3. Consider Using A Material Library: Blender has a basic built-in material library, but it's not very useful. Look into large libraries, such as the PBR Material Asset Library + OneClick Add-on for Blender (https://shardsofred.gumroad.com/l/CfOnY). Creative Commons Zero (CC0) materials can be used for basically anything, and will save you time.

  4. Remember to Perform a UV Unwrap on Your Model Geometry for Best Results When Texturing: The most realistic textures in the world won't help you if your model doesn't have good UV mapping. Remember the chocolate Santa Claus example? Proper unwrapping is essential for creating realism with your models. https://docs.blender.org/manual/en/latest/modeling/meshes/uv/applying_image.html

  5. Recommended Extensions and Add-ons:

    • VDM Brush Baker: Helps you create and bake Vector Displacement Maps directly in Blender.

    • Bool Tool: Boolean operations for complex shape creation.

    • Node Wrangler: Enhances node editing management.

    • Rigify: Automated rigging solution for character animation.

    • Loop Tools: Useful for organic modeling (with some bugs appearing in Blender 4.2—be sure to keep this add-on updated!).

  6. Other Useful Add-ons: Auto Mirror, F2, Extra Mesh/Curve Objects, Extra Grease Pencil Tools, Copy Attributes Menu, and MeasureIt.

    Bonus: Need furniture? Most of IKEA's product catalog has 3D models available. Search for "IKEA" under Extensions and you can easily find and import 3D models into your scenes.
    Note: Ensure 'Allow Online Access' is enabled in Blender's System Preferences for add-on updates.
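
As a small convenience, the undo-step change from item 2 above can also be made from Blender's Python console; a two-line sketch using the bpy preferences API:

    import bpy

    # Raise the undo history from the default of 32 steps (uses more RAM).
    bpy.context.preferences.edit.undo_steps = 256
    bpy.ops.wm.save_userpref()  # persist the preference across sessions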

Create Augmented Reality Experiences for iOS with Xcode Developer Tools, Reality Composer, and USDZ File Format

Once you've finalized your form, added necessary details, and applied your materials, you should be ready to export your model.

Step-by-Step Instructions for Preparing 3D Assets for Export to USDZ:

  1. Duplicate Your 3D Assets and Collections: Create a new instance of your 3D assets specifically for export.

  2. Apply All Transforms: Hit A to select all visible objects, then press ⌘ + A (Command + A) and select All Transforms to apply.

  3. Apply All Modifiers: Apply all modifiers in the same order they were added to each model—except for subdivision, as tessellation data can (usually) be included without applying it directly to the models.

  4. Join All Components: Hit A to select all visible objects, then press ⌘ + J (Command + J) to perform a join operation.

  5. Export the File: Go to File > Export > Universal Scene Description (usd*).

  6. Configure Export Settings:

    • Include: Check Visible Only and Selected Only.

    • Blender Data: Select Custom Data.

    • Namespace: Use the default setting (UserProperties).

    • Blender Names: Enable this option.

    • File References: Set to Relative Path.

    • Convert Orientation:

      • Z = Forward Axis

      • Y = Up Axis

        Note: Many other 3D tools, including Xcode's tools, interpret 3D models with a different axis orientation than Blender. If you don't apply this conversion, you'll find your model improperly rotated following import. If this happens to you, double-check these settings.

    • Use Settings for Render: Enable this option.

    • Object Types: Select Mesh, Volumes, Curves.

    • Geometry: Enable UV Maps, Rename UV Maps, Normals.

    • Subdivision: Set to Best Match.

    • Rigging: Enable Armatures (if you have rigged and animated your model).

    • Materials: Select USD Preview Surface Network and Export Textures.

    • USDZ Texture Downsampling: Set to 1024px or up to 2048px (the largest size acceptable for iOS QuickLook).

  7. Update File Extension: Change the export file name extension from .usdc to .usdz.

  8. If no issues are encountered during export, you should be able to view your model in Augmented Reality on any iOS device. Open your exported file from iCloud, or send it via email, text, or AirDrop to another device to view.
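
If you'd rather script this preparation, here is a rough sketch of steps 2 through 5 using Blender's Python API (bpy). It assumes the scene contains only the visible mesh objects you want to export; the usd_export keyword arguments follow the 3.x/4.x API but change between Blender versions, so verify them against your build (and fall back to the export dialog for the orientation, subdivision, and texture options listed in step 6).

    import bpy

    # Step 2: select all visible objects and apply all transforms.
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

    # Step 4: join everything into a single mesh. The active object is the
    # join target; joining fails if non-mesh objects are selected.
    bpy.context.view_layer.objects.active = bpy.context.selected_objects[0]
    bpy.ops.object.join()

    # Steps 5 and 7: export as USD, writing the .usdz extension directly.
    bpy.ops.wm.usd_export(
        filepath="//model.usdz",      # '//' is relative to the .blend file
        visible_objects_only=True,
        selected_objects_only=True,
        export_uvmaps=True,
        export_normals=True,
        export_materials=True,
        evaluation_mode='RENDER',     # "Use Settings for Render"
    )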

Setting Up Xcode and Reality Composer:

The latest version of Xcode doesn't include Reality Composer, as Apple has shifted their focus to the Vision Pro. You can still access the Augmented Reality Tools for iOS devices, with some additional steps.

Step-by-Step Instructions:

  1. Download the Latest Version of Xcode 14: Download from the provided link: https://developer.apple.com/download/all/

    NOTE: You'll need to create an Apple Developer Account (it's free) to access the above link, or use this direct link: https://download.developer.apple.com/Developer_Tools/Xcode_14.3.1/Xcode_14.3.1.xip

  2. Extract and Rename The Older Version of Xcode: Rename Xcode.app to Xcode14.app and place it in your Applications folder.

  3. Open Terminal on Your Mac.

  4. Open the Applications Folder in Finder.

  5. Drag the Xcode14 App into Terminal: This will automatically add its path.

  6. Add to the Path: Next to the path, add: /Contents/MacOS/Xcode.

  7. Full Command Example: The command will look like:

    /Applications/Xcode14.app/Contents/MacOS/Xcode

  8. Run the Command: Press Enter to run the command.

  9. You should now have access to Reality Composer in Xcode. Click the Xcode menu in the menu bar, then click Open Developer Tool, and then click Reality Composer.

    Learn more about using Reality Composer here: https://developer.apple.com/documentation/realitykit/realitykit-reality-composer
    Learn more about Apple Reality Kit and ARKit here: https://developer.apple.com/augmented-reality/tools/

BONUS: Generative AI and 3D

Tripo AI (https://www.tripo3d.ai/app) is an advanced generative AI tool that allows for both text-to-3D and image-to-3D model generation. This tool offers users an intuitive way to create complex 3D assets with minimal manual input, simply by describing what they need or providing a reference image.

Key features:

  • Text-to-3D and Image-to-3D Conversion: Users can input a detailed description or upload an image, and within seconds, the AI generates a draft model ready for refinement.

  • Prompt: "A pineapple-hedgehog with spiky fruit armor and leafy quills."

    https://tripo3d.ai/preview?share=9a57357e-6262-469c-afb1-c7af74d92c93

  • Prompt: "A 1980s sci-fi robot stylized as a Nintendo NES product."

    https://tripo3d.ai/preview?share=a08a55cd-9e66-48a5-be3d-85a26160e461

  • High-Speed Generation: Tripo’s AI processes are optimized for efficiency, allowing users to generate detailed models in a matter of seconds, ideal for prototyping or quick visualizations.

  • Customization Tools: After generating a model, users can adjust topology for increased detail, or apply stylization, such as voxels.

  • Seamless Integration: Tripo3D supports a variety of export formats, such as .usdz, .obj, and .fbx, making it easy to import models into Blender and other software for further editing.

  • Full PBR Texture Map Generation: includes generation of PBR textures, adding even greater detail beyond the geometry.

  • Automatic Rigging and Basic Animations: applies a basic animation rig and simple animations, such as a running character, to the generated model geometry.

Downsides:

  • Imprecise generation: just like AI image generators, results are unpredictable and often wrong.

  • Costs: Using this tool requires a membership plan with limited monthly credits, which restricts usage.

CREDITS:

Thanks to all of these wonderful educators and content creators who continue to inform and inspire me throughout my 3D journey. Preparing this lecture required lots of time and consideration for how to condense what I’ve learned over the last five years into something I could demonstrate in under 2 hours. This wasn’t easy, but I had many fantastic resources to pull from.
If I’ve left anyone out, please leave a comment so I can include them here:

YouTube Creators:

Reference Files:

Robot model created with Tripo AI

Robot model with corrected orientation

Reality Composer demo file

Interactive USDZ demo file

Note: Due to a bug, the robot walking animation doesn't play back in QuickLook AR for iOS.

HAVE QUESTIONS? ASK PHIL

Have questions about CAD, Fusion 360, or the Portland maker scene? Ask Phil! He's a Principal Software Engineer at Autodesk, Inc. and teaches CAD at Portland Community College. He's also the host of the Community Conversations series: Getting started with 3D modeling in Fusion 360.

You can reach him at phil.eichmiller@autodesk.com

Phil Eichmiller — Principal Software Engineer at Autodesk, Inc.

TUTORIAL: How to use ultra realistic Quixel Mixer materials with Fusion 360 [Part 2]

Welcome back! In Part 2, we’ll explore adding Quixel Materials to your designs in Fusion 360 and setting up a rendering scene. If you haven’t already, review Part 1 and install Quixel Mixer. You’ll want to create and export a mix for use in Fusion 360 prior to the steps in this tutorial, or download an example material set here.

First, let’s create a new project in Fusion 360:

1. Create a new Fusion 360 Project

After you open Fusion 360, click "Save" and give your project a name. In this example I used "QuixelMaterialDemo."

After you save your project, we’ll want to create a new component and make it active.

2. Create a new Component

This is generally a good practice with Fusion 360, because we can more easily manage changes made to the design when the timeline is broken up by individual component histories. Name your component "Floor" and make sure "Activate" is selected (it should be by default), then click "OK" to continue.

Next, we’ll want to create a sketch to define the floor’s dimensions. Click “Create” and make a Center Rectangle on the bottom plane.

3. Create a Floor

Make your sketch 3 meters x 3 meters in size, with the Origin at the center. Click “Finish Sketch” to continue. If you’ve done everything right, then you should have a sketch that is fully constrained (i.e., you’ll see black lines instead of blue lines for the outer dimensions of your sketch).

Next, we'll extrude the sketch below the plane. This will create a new body based on our sketch dimensions.

Click Create and then Extrude. Then, extrude the sketch -1mm below the plane and click “OK.”
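
For anyone who prefers automation, the component, sketch, and extrusion above can be roughed out with Fusion 360's Python API. This is a sketch under stated assumptions: it follows Autodesk's published adsk API, runs as a Fusion script with an active design, and accounts for the API's internal units being centimeters (so 3 m becomes 300 and -1 mm becomes -0.1).

    import adsk.core
    import adsk.fusion

    def run(context):
        app = adsk.core.Application.get()
        design = adsk.fusion.Design.cast(app.activeProduct)
        root = design.rootComponent

        # Step 2: create a new "Floor" component.
        occ = root.occurrences.addNewComponent(adsk.core.Matrix3D.create())
        floor = occ.component
        floor.name = 'Floor'

        # Step 3: sketch a 3 m x 3 m center rectangle (API units are cm).
        sketch = floor.sketches.add(floor.xYConstructionPlane)
        sketch.sketchCurves.sketchLines.addTwoPointRectangle(
            adsk.core.Point3D.create(-150, -150, 0),
            adsk.core.Point3D.create(150, 150, 0))

        # Extrude -1 mm (-0.1 cm) below the plane as a new body.
        profile = sketch.profiles.item(0)
        distance = adsk.core.ValueInput.createByReal(-0.1)
        floor.features.extrudeFeatures.addSimple(
            profile, distance,
            adsk.fusion.FeatureOperations.NewBodyFeatureOperation)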

Next, Save the design. You’ve created your first body and now would be a good time to save your progress.

Note the reason for your save and Click “OK.”

Next, we'll want to change the Appearance of our floor. Click Modify > Appearance to bring up the Appearance window.

4. Add material

Here we can see the default material for the Floor body. We’ll want to replace that material with our Quixel Mix. To do that, let’s start by downloading a similar material.

Note: in general, you’ll find it is easier to add Quixel Mixer materials when you adapt an existing material in Fusion 360 with similar attributes. In this case, we can use the existing Asphalt Material.

After the download finishes, click and drag the Asphalt material into your design.

We can then replace the default material with the Asphalt.

5. Replace Fusion 360 Material with Quixel Mix

Next, we can begin modifying the Fusion 360 Asphalt material with the Quixel Mix.

As mentioned in Part 1, the materials in Fusion 360 are made up of individual map image files:

Albedo/Diffusion/Color — the color a material reflects

Normal and/or Height Maps — the bumps and imperfections along a surface

Roughness — the smoothness of a surface (ranging from a sharp reflection to fuzzy/diffuse)

Reflectance/Specular/Metalness — the reflectiveness of a surface (ranging from mirror finish to a dull surface)

Anisotropy/Ambient Occlusion — the shadows along a surface

Refractive — how light bends through a surface

Emissive — how much light a surface emits (glow)

Translucency/Opacity — how transparent a surface is to light

If you’re using the included sample images, you’ll find some but not all of these maps. Depending on what materials you’re mixing, you’ll need different image maps. The sample image package includes:

Floor_Diffuse.png — Color (placed in Parameters)

Floor_Roughness.png — Roughness (placed in Parameters)

Floor_Specular.png — Reflectance (placed in Parameters)

Floor_Normal.png — Normal (placed in Relief Pattern (Bump))

Floor_AO.png — Anisotropy (placed in Advanced Highlight Controls)

By replacing and adding these map files to the Fusion 360 Asphalt material, you can transform it to the Quixel mix. To start this replacement process, open the Appearance window, double-click the Asphalt material and then click “Advanced…”

Rename the material to “Quixel_Asphalt” to distinguish the material from the original Fusion 360 Asphalt.

Under Parameters, we can add three (3) image maps. First, we’ll apply the diffusion/color map to the Image input in Fusion 360. Click on the Image filename 1_mats_surface_asphalt_color.jpg and navigate to your replacement images.

Select your Albedo/Color/Diffuse map file. If you’re using the sample images, it’s the file named Floor_Diffuse.png. Click Open to replace the default image file.

Next, we'll repeat the process with the Reflectance and Roughness maps. By default, these two material attributes are set as slider values; click the drop-down arrow and then select Image to replace the slider value with an image map.

Next, select the Metallic/Specular image map. If you're using the sample images, select Floor_Specular.png and click Open.

Next, repeat the same steps for the Roughness value. Select Image and then select your Roughness Map. If you’re using the sample images, select the Floor_Roughness.png.

Now that we’ve completed the three Parameter maps, we can move on to the Relief Pattern (Bump) map. Once again, we’ll replace the default image file (1_mats_surface_asphalt_.jpg) associated with the material. Note: Fusion 360 supports both bump and normal maps. If you want to know more about these two approaches to texturing a 3D model, then click here.

Next, we need to change the Relief Pattern from a Height Map to a Normal Map. To do this, we need to Edit the image.

Next, scroll down to Advanced and change Data Type to Normal Map.

Next, we need to ensure that all of our maps use the same Sample Size, and that all of our maps have linked texture transforms. Check Link texture transforms under the Transforms section of the Texture Editor. Be sure to repeat both steps for every image map.

These steps are important, because they ensure that all of the image map data are aligned equally to the material in Fusion 360. After you’ve verified these settings, you can click “OK” to finalize the changes to this material.

Now that the material has been updated you can Close the Appearances window.

To check and validate our new material, we need to switch to the Render Workspace in Fusion 360. Click on the Workspace button, and change it from DESIGN to RENDER.

6. Test render scene

Next, let’s save the design to capture the new material settings in your Fusion 360 Timeline. Click File and Save.

Fusion 360 will prompt you to describe your save point. Let’s name this save “Quixel Material Added” and click OK.

Before we can test our new material, we need to edit the SCENE SETTINGS from the SETUP menu. Open the SCENE SETTINGS window, click and drag "Dry lake bed" to the Current Environment, and then click Close.

We also need to change the IN-CANVAS RENDER settings to FAST, so that we can easily see the material’s performance during rendering. To do this, click on the IN-CANVAS RENDER SETTINGS icon and Click on the Fast tab. Then, Click OK to update the rendering method.

Next, we can preview the rendering, and see how the various maps work together under different lighting conditions. To do this, start the In-Canvas Rendering and then open Scene Settings, click on the Position Icon to bring up the Rotation and Scale Sliders. By changing the rotation, you can see how the surface of your floor object casts shadows at different angles, corresponding to the surface material.

Make sure to save your project to retain your rendering settings. If you've made it this far, then congratulations! You now have all of the information necessary to import Quixel Mixer materials into Fusion 360. In Part 3, we'll explore some techniques for applying these materials to complex geometries, and how to post-process your images for additional realism. In Part 4, we'll take these realistic models and generate Augmented Reality experiences for iOS.

Stay tuned!

Core77 Design Awards 2022

I'm very pleased to see that Stuart Candy's project "Imagination is a Commons" is the winner of Core77's 2022 award for Speculative Design. Back in March of 2021, I received a somewhat unusual paid request — for studio photography services. One year into the COVID-19 pandemic, when vaccines were still out of reach for many and facilities and institutions remained shuttered, I was suffering from cabin fever and isolation, and grieving the death of my uncle.

I was finishing up my second Master’s degree, and Pittsburgh had only begun to thaw after a long and difficult winter. Without access to the campus photography studio, Stuart had reached out to his network at Carnegie Mellon, seeking alternatives. As luck would have it, during my undergraduate studies I invested in my own studio photography setup.

My barebones digital photography setup

Scrappy resilience was a constant theme throughout 2020. Students without studio space were constantly finding ways to make do. This was one of those rare moments where few compromises were necessary, and I had everything I needed on hand. Imagine my surprise when I was handed a bag full of artifacts from the future…

T-shirts from a coding festival in the year 2030

Social distancing and staying home (for the better part of a year) had distorted my sense of time. In the first year of the pandemic, there were days and weeks that seemed to vaporize, and weekends that lasted a month. To hold these artifacts, and to focus on them through a viewfinder, I felt as though I had stepped completely outside of time and space. This was a perfect diversion from my mundane existence, and a reminder that this too shall pass.

Thank you, Stuart and Ceda. And congratulations!

Week 3 and 4 Update: 3D content migration woes

This update is coming in late, as I've been trying to come up with some way to explain the difficulties I'm facing with this project while also respecting the client's privacy and IP. I'm leaning heavily on metaphor, but…

Imagine there are three people sitting at a bar. One of them speaks Dutch, English, and German (but only okay German, not great). Another speaks Spanish, some English (okay-ish grammar, few nouns), and fluent German. The last person at the table is a little bit unusual. They're a rapper from Japan who speaks fluent Japanese and a little English. As a child they were an exchange student in Germany, but they've forgotten most of what they learned. The rapper also insists on speaking-as-quickly-as-their-mouth-can-run. They never slow down. Speed is everything.

They all have some overlap, however imperfect, in their spoken languages, but none can understand each other perfectly. This is what it feels like to develop assets for a still-in-beta realtime engine while leveraging a parametric modeler and an open-source 3D creation tool, all while adhering to standards from a professional VFX- and CGI-specific 3D tool.

This week my primary focus has been on getting Blender and Unreal Engine to talk to each other. Unreal prefers the FBX file format (Autodesk's proprietary interchange format). Blender can export most data in this format, but there are a few catches:

  • No shape keys

  • Subdivision data is limited to a single iteration

  • Dynamic animation can only be exported with baked physics, and is limited to vertex and face transformations (kind of, depending on what you're doing).

Additionally, Unreal doesn't quite understand Blender's material system. Blender will still export textures and UV map data, but you'll need to rebuild the material blueprint in Unreal to recreate whatever you had in Blender. It is far from a 1:1 exchange.

There are also many weird quirks:

In Blender, the default units are meters; in Unreal, the default unit is centimeters. Before exporting from Blender, you need to set Unit Scale to 0.01. If you instead switch Blender's units to centimeters and leave the Unit Scale at the default of 1.0, you'll experience strange anomalies in things like collider bodies, skeletal meshes, etc. A quick sketch of this scene setting appears below.
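
Here's that Blender-side fix as a short sketch for the Python console (the property names come from the bpy scene API):

    import bpy

    # Keep metric units, but scale 1 Blender unit down to 1 cm so exported
    # FBX data matches Unreal's centimeter convention.
    bpy.context.scene.unit_settings.system = 'METRIC'
    bpy.context.scene.unit_settings.scale_length = 0.01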

In naming the rigging elements (IK skeletons, etc.), you DO NOT name any bones “root,” because Unreal will assume a hierarchy which may differ from your hierarchy (parenting) in Blender. However, you may rename the Armature container to “root” to conform to Unreal’s hierarchy.

Lastly, the only rigging data that reliably translates between Blender and Unreal Engine are deform bones and baked animations/transformations.

I’ve reached a stumbling block with my current 3D character. I can rig the character to animate, and even output that data in a manner which Unreal Engine can interpret. This comes at the expense of a vital visual element that was procedurally generated. What comes next is a difficult choice:

I can either integrate procedurally generated elements into a single mesh geometry (potentially compromising some rendering performance and visual fidelity) but without having to rework existing animation and rigging, or I can attempt to recreate the procedural mesh instancing I developed inside Blender but natively within Unreal. The former will be labor intensive, but I understand the tools well enough to work consistently toward a known output. The latter involves many unknowns, but I will also gain a deeper understanding of workflows within Unreal Engine. I’m attracted to this latter option, but I don’t know if it is best for the client and their expectations.

Week 15: Final Project Update

This will be my final update for the Studio II project. I feel a complex blend of emotions as I write this. I am relieved to be done. I am also sad to know that my time with this team has come to an end. I consider myself incredibly lucky to have spent so much time working with some truly amazing designers. I don’t know if I will ever experience anything like this again, but I hope so.

Remote collaboration has few perks, and I was lucky to be working with folks who helped to make this experience so much fun

The work we have done this week feels different for many reasons. We had to prepare something for a large and diverse audience, not all of whom were familiar with the context of our work. Additionally, we needed to use this time to tie up remaining loose ends—we needed to reach an end state where our process could feel somewhat conclusive.

Our efforts were just as collaborative as ever as we divided up the labor of our remaining tasks. It was incredibly reassuring to know my team members' strengths and capabilities, and to know who was working on any particular task. For my part, I was busy scrubbing through a timeline in After Effects, rapidly assembling visual representations and edited footage to make a convincing newscast from the future. Considering the constraints of remote collaboration, I'm very pleased with the final product.

I have continued to ruminate over this notion that the future is something we cannot predict, but rather something we build through imperfect knowledge. I question the power our team has to influence this process, not because I lack confidence in our shared abilities (as I've said early and often, I've been working with an amazing team), but out of concern for the consequences of inspiration. Our process was far from perfect. The vagaries of a pandemic distorted every effort. The educators we sought to connect with were terribly busy. Our own team suffered from fatigue and sleeplessness as we juggled future careers and other academic expectations. The complexity of this topic is well beyond the scope of fifteen weeks of diligent inquiry.

I cannot speak for the entire team, but I know that for me personally our exploratory research was the most intimidating phase. It was immediately clear that we were engaging in a very difficult problem. Education intersects with so many other areas of study. It is a problem of policy, culture, funding, methodologies, and it is weighed down by a history of systemic inequality and racism. Generative research methods were the biggest surprise. I was astonished by what could be gleaned through a participatory process. Including educators in the generation of concepts was exciting, and I wish we had more time to engage in this work.

Our final concepts are a reflection of many perspectives and early prototypes generated by K-12 educators

Every phase also felt too short; we needed to move on before we could fully digest what we were learning. Nevertheless, I stand behind the work we have done, because I know it represents the best we had to offer. I knew design was a messy process long before my time at CMU, but I now have a much clearer sense of what it means to engage with that mess and to assemble something coherent. This work is not easy, and it is never, ever truly complete. The deadlines of a design project function like the layers of silt in a fossil record: each stratum represents a progression with no clear ending or beginning. We can always dig deeper.

I hope these artifacts will inspire others as they have inspired us.


Our team has assembled a project homepage. There you will find more comprehensive information about this work, the final outcome, and documentation. Check it out!

Week 14 update: The Late Edition

The final push is now upon us. This past week I’ve been working nearly around the clock with my team, pushing to bring about our future vision. One of the most labor intensive, yet rewarding parts of this project has been the production of a newscast from the future. We’ve made countless script revisions, scraped stock images, sound, footage, and crafted motion graphics elements to bring this story to life. It’s been challenging, but I’m excited to see the final results.

What’s working: our approach to generating a video is deeply grounded in research. We’re incorporating concepts generated with participants — public educators who so generously gave us their time and perspectives on the present and future state of teaching in American schools. We’re also building our story to represent several systems-level shifts, including national legislation, teachers union contracts, and individual school reforms. We used several different futuring frameworks to develop these narratives, including: cone of possibility, backcasting, STEEP+V, Multilevel Perspective mapping, affinity mapping, and worldview filters.


This process has been anything but precise. The future is something we build, not something we predict through careful measurements of trends. Understanding this truth has been very reassuring. Now that we are approaching a conclusion, I feel as though I have been on a long drive through undeveloped territory. The daylight of exploratory research gave way to the twilight of generative research and in the pitch of night we evaluated concepts. With only one headlight, we squinted off into the distance, to read the signs. Sometimes the precipitation of a pandemic obscured everything, but we relished the intermittent moments of clarity.

Those latter moments were by far the most exciting. "Oh, oh, what if…" was a common preamble to productive yet heady conversations with peers over Zoom, as we scrambled together various visual representations in Miro and Figma.


This workflow has been essential to synthesizing content and a visual language for our video, which we’ve been iterating on through various stages of prototyping. I’m concerned about the overall fidelity and recognize that this will be important to suspension of disbelief for our intended audience — policymakers and various stakeholders connected to PPS must find this artifact compelling enough to act and bring these concepts into a shared reality.


On the technical side, video editing and motion graphics are computationally intensive tasks. I built a beefy workstation prior to starting at CMU, and this machine has been essential to so many tasks and assignments. Nevertheless, I’ve found that this work has strained my system’s capacity. I’ve purged files to make room for temporary caching and rendering outputs. I’ve reset my router in a desperate effort to speed up the transfer of data to Google Drive, and ran my system in a barebones state to maximize resources available to Adobe CC’s memory-hungry apps.

The stress I place upon my design tools is complemented by the stress I've applied to myself. My sleep has been intermittent. I take short naps on the couch, and on more than one occasion this week I found myself working through the sounds of birds before the break of dawn. These late-night hours are quiet and free of distraction, but they tend to make the day that follows less than appealing. I'm staying awake through this last week of lectures, but I find my mind trailing off into thoughts about the timeline and how I might optimize frame rates for nominal render times. I'm obsessed with getting this video done, but I know this pace is not sustainable.

Week 11: qualitative evaluation of concepts

Our online survey is now underway, and while this virtual format isn't exactly like so-called "speed dating," we are hoping it can serve a similar purpose for our research. Creating a meaningful online experience for our participants was a tall order, especially under such tight constraints. There are many risks in creating a fully automated and hands-off system. Not being there to clarify or address questions or concerns in realtime was something we needed to accept as a trade-off. In exchange, we have a dozen unique participants, ranging from 2 to 27 years of experience, from various districts around the country.

So far, the majority of responses have been from an online community of English teachers, so our data is skewed toward this perspective. On the plus side, English teachers provide excellent written responses. To avoid the pitfalls of statistics and quantitative analysis, we designed an online survey with open text fields, and we framed our questions around hypothetical scenarios. This would provide us with reflection and insights into how teachers imagine these concepts for themselves, and what perceived deficiencies come up for them in thinking about these systems in action.

Screenshot of survey responses, exported into a CSV file

The last 24 hours in particular have been very exciting, as we finally gained access to online educator communities. This process has been slower than we wanted, but we first needed to fully develop our survey before we could deploy it. That process in and of itself was a design challenge.

Last weekend, we decided to use the Tripetto platform. This gave us the same logic capabilities as Typeform, but without any additional costs. It became clear almost immediately that we would need to prototype and refine our survey before gathering teacher feedback, and this effort was highly collaborative.

With multiple teammates, it was possible to divide this task into several areas that could be worked on independently and in parallel. We first decided on a basic structure and strategized the division of labor. Carol worked on the text content based on a logic diagram we crafted together; while Carol drafted this outline, I created a mockup version in Tripetto. Without access to finalized concept sketches, I took some poetic license.

Screenshot of 2nd iteration prototype survey

As Carol and I worked together to refine the text copy, Cat and Chris worked together to create images and descriptive text for our participants. Once all of this content was ready for Tripetto, we began doing test runs, trying to break the experience. This revealed some quirks with Tripetto's logic functions and some of its less apparent features.

A few features deserve honorable mention; Tripetto has a lot of subtle capabilities that we take for granted in other online experiences: placeholder text, required fields, multiple-choice radio buttons, checkboxes, and multi- and single-line text boxes. During the refinement phase these features became essential, and it was exciting to discover each one just as it proved worth the effort to hunt down.

The minimalist UI of Tripetto made these features less evident, but not too hard to locate or execute. From start to finish, this experience felt a little shaky and uncertain but viable.


I often found myself this week grinding away on the platform, slipping into a state of mind that Mihaly Csikszentmihalyi describes as "flow." In other words, Tripetto wasn't easy to use, but building a survey in it was just challenging enough to keep me interested in working through obstacles. What helped support this effort most was building models within platforms where everyone on the team was already fluent: for us, primarily Miro, Google Docs, and Sheets.

Screenshot of two representations of the survey, carried across platforms (Miro and Sheets)

First impressions matter, and we didn't want to put out anything that read as a work in progress. Even with this in mind, we made a few last-minute tweaks as we adapted our survey to maximize its pulling power across different social media environments.

Arnold Wasserman's desk critique was incredibly valuable for our team, as his feedback helped us consider the importance of our survey as a communication tool. He recommended that we make the implicit explicit: directly communicate to our participants what we expected and why. We were encouraged to explain what questions we were asking, and to share this openly. This kind of transparency can be tedious, especially in text-based systems. I took this to heart and simplified statements throughout the entire experience.

This gave the survey a personality all its own; like a casual and curious friend, we asked about specifics but with little pressure. We kept things open.

Open-ended data cannot simply be calculated; it must be evaluated for patterns. Next week will be a scramble to synthesize patterns and new insights as we work to finalize system concepts into well-defined parameters. We hope that through this process we will also identify opportunities to produce relevant and compelling artifacts (our final output/deliverable).

It still feels like a risk to be so far into a process and still not have a clear idea of what it is we are making. We instead draw our assurances from what we have already made: an index of relevant articles, interview notes, countless diagrams and visual representations of high-level abstract concepts, maps at almost every level of visual fidelity imaginable, hundreds of presentation slides, dozens of pages of reflective text, and months' worth of Slack messages, shared links, and drafted emails. We created interactive digital workshop spaces and protocols for our participants, and archives with 256-bit encryption.

When looking at the collective volume of effort from this team, it’s difficult to imagine that we wouldn’t make something meaningful in the end. Is that too optimistic? Ask me in a month.

Week 10 update: Speed dating and concept evaluation

We had a somewhat irregular week for Studio II. After our presentation, our team regrouped and strategized on how we might conduct the next phase of our research. We started out with just two concepts (an ARC educator "hackathon" and a community-promoting "ARC awards" program), and while our team felt confident that these concepts were feasible and desirable ways to address our problem space, we still had a lot of open-ended questions that would require further inquiry. Additionally, we became very concerned with the potential opportunity costs of not exploring more alternatives.

To address this concern, we decided to return to our primary research and synthesize niche problem statements that may provoke additional concepts. This went extremely well, and we now have more than a dozen concepts ready for evaluation. We're excited to get these ideas in front of educators, but that remains a substantial obstacle to our process.

We consulted with Hajira and Sofia about our concerns, and asked how we might convert the highly synchronous activity of "speed dating" to a more online and asynchronous form. They recommended Typeform, and while this option was appealing, it came with a few drawbacks. The ultimate dealbreaker was the price: it costs $40 to enable the features that actually make the platform more useful than free products such as Google Forms. After some digging, I found a free alternative (they literally market themselves as such). Tripetto offers logic and branching that will enable our team to structure survey paths that tailor the individual experience. This is pretty huge, considering the scarcity factors our team has struggled with since the beginning of this project.


Despite this progress and excitement for next steps, I’ve personally struggled with motivation this week. I know that a lack of regular sleep and some external stressors are partially to blame, but there are many factors contributing to this. It’s been difficult to process (cognitively and emotionally) what comes next for me.

This week, I received my cap and gown, a diploma frame, and a few other artifacts to commemorate my time at CMU. I've been in school since January of 2014, and I feel incredibly lucky and grateful for this opportunity. To date, academics has been my longest career: I have spent more time being a student than my entire Navy enlistment, or my time working at Intel as an engineer. Each chapter came with its own struggles, failures, and successes.

Each made an indelible mark on my psyche and personality. I could never have imagined, in my wildest dreams, that my educational path would end here in Pittsburgh, confined to my shoebox apartment, a deadly virus burning down countless lives while I indulge in high-level theories. I owe so much for this good fortune, and I do not know how I will ever repay the world for what it has given me.

It’s not so much that I am procrastinating — I put in a lot of hours this week, especially for this project — it’s that I’m paralyzed, afraid that what we are doing is missing something vitally important yet still unnamed. I also know that 15 weeks is hardly enough time to understand potential futures and their relationships to the current state. 

It's all crushing me down. I feel the weight of an obligation to deliver good work, yet I remain terribly uncertain about this process. I've never done such intensive research before, and while I believe the theories and frameworks I'm soaking in (worldview filters; the Voros cone; archplot structures; CLA; empathy mapping; participatory, generative, and co-design; etc.) are helpful and necessary to our work, it's difficult to know whether the way our team applies these unfamiliar methods will yield truly impactful results.

I know that this is a learning experience; first and foremost, it is an invitation to fail brilliantly as we discover new ways of making. But without any prior experience with this stage, it's difficult to keep my chin up and to believe in my own creativity and ability.

Week 8 Update: Generative Research and Future Visions of Portland Public Schools

We began this week with a guest lecture from Adam Cowart, a PhD candidate in the transition design program. He introduced us to the concept of CLA (Causal Layered Analysis). We used this framework to better understand the landscape of our problem space at Portland Public Schools. Adam described different facets of the problem space through the lens of “litany filters.” To recognize what futures are feasible, we need to understand the triad of history, present, and future, and what elements in our landscape pull, push, or weigh down progress.

We took some time in class to reframe our insights through this framework, and began synthesizing potential elements to build a bridge toward the future vision created by Prospect Studio. This process began slowly, but after some heavy lifting we began filling out the diagram with great enthusiasm! It was refreshing to revisit our secondary research (already categorized under a STEEP-V framework), and revealing to see visually how much further we have advanced our understanding of this problem space since our literature review and background reading.

Outside of class, our team was busier than ever — working to adapt and overcome the obstacles we've encountered in our generative research phase has not been easy. I've struggled to support these efforts; the external factors of my personal and professional life have been an ongoing source of strain. I feel so much gratitude for the support and encouragement I've received from this team, and this week I felt a great deal of pressure to reciprocate.

Sample of generative research protocols

This effort to pay back the generosity I received (when I needed it most) began with a comprehensive draft of our protocols for generative research and the specifications for our workshop. Working with Carol, we delivered this to the team ahead of schedule. It was necessary for us to draft new protocols and workshop exercises to include a broader audience outside of Portland Public Schools, because last week proved to be somewhat of a dead end for seeking participation from our intended stakeholders (administrators and educators at PPS).

For our workshop, we wanted to learn how different stakeholders perceive their relationships with their counterparts, what different stakeholders prioritize and why, and how educators think about the future of public education, and to explore and define preferred futures.

We conducted three separate workshop sessions with educators outside of PPS. This included neighboring districts (Gresham-Barlow) as well as out-of-state educators. This approach allowed us to glean insights about what is common across the US public school system and what is more specific to Portland. While this adaptation risks skewing the data, it is far preferable to remaining without any insights beyond our primary research activities.




Screenshots of workshop activity

This was my first experience with executing participatory design with stakeholders and it has been such a rollercoaster of emotions. Since Carol and I worked on the protocol together, it was only logical that we also create the visual and interactive components for the workshop. We iterated on our initial concept by practicing with our own team, with each member taking a turn roleplaying as a participant. This helped us to work out the kinks and refine details before putting anything in the hands of our participants.

The first workshop with a real participant was very revealing. Having access to their thought process in real time meant that their visual associations, priorities, and ideas about the future were peeled back in layers, digging deeper into their lived experiences than we ever got through primary research and conversational interviews. Even the generation of simple sketches gave us glimpses into their inner worlds. I now question how important it was to conduct traditional interviews in the first place. Workshops are just so much more dynamic and active than interviews, and I consistently came away feeling more connected to the participants and their experiences.


This weekend was highly reflective. With new insights in hand, we spent over five hours evaluating what we discovered. There was so much for us to consider and it was only once we had the chance to pick it all apart together as a team that we could begin to make sense of it all. Many of our initial assumptions were blown out of the water. Our newfound perspective gave us a real sense of how important relationships are in the field of teaching. We also learned that technology is probably the least important factor for educators — with the exception of a desire for students to have high-speed internet at home, there was little to no interest in improving access to technology generally.

I'm still getting used to applying so many different approaches and methods so quickly. I feel like I'm only occasionally operating with a sense of clarity. There has been a prolonged fuzziness that's difficult to describe or ignore. It seems as though new insights provoke deeper questioning while offering little in the way of certainty. I think this is just the experience of progressively revealing collective and individual ignorance. Before learning enough to act decisively, we must first gaze into the vast abyss of what we still do not know.

Week 6: Planning and coordinating generative research

“If we have to wait for the next pandemic to bring about big change, then we’re in big trouble!”

—Peter Scupelli

This week, our team presented our exploratory research findings to clients from Prospect Studio, for which we had done a "dry run" the week prior. The feedback we received was generally very positive. In particular, I was pleased to learn that the "ARC" concept aligned with the client's understanding, and they even suggested that they would adopt this terminology for themselves! There was a lot of back and forth on this concept, and it was incredibly validating. By recognizing the overlap and potential integration of these attributes (Adaptive, Resilient, and open to Change), and addressing them as a single verb rather than three distinct adjectives, we've reframed our inquiry to reflect actions and behaviors.

Fiona appreciated that we identified the multiple roles of educators who must address their own social and emotional needs, while also supporting students. She pointed out that teachers also need tools for communication.

Our Collaboration Structure diagram was a success with Jenny Hoang. There was some confusion around Board members and their placement within districts, but Carol was able to clarify this well for the entire team, and I continue to be grateful for her contributions. I'm very fortunate to be working with a team that has a nearly two-year working relationship—we've developed a beautiful shorthand together, and we recognize each other's cues.

Administrators as a leverage point is something that resonated with both Jenny and Fiona, and this is promising for the next phase of our research. Jenny questioned our scope under the MLP: the national level might be too broad for some contexts, and there was a lack of distinction between state-level and federal-level policymakers. This is something we will clarify going forward. Otherwise, the mapping of structured interactions was a huge success.

A question raised as we outlined this structure: what leverage points are we considering, and what insights can we glean from the COVID adaptations made to facilitate learning? We are running a grand experiment in remote learning, but what are the lessons or takeaways from this experience?

We're especially interested in the role of technology in facilitating communication; video conferencing is only one small part of this. Thinking about organizational structures, we want to improve the modes and means of communication between administrators, educators, and other stakeholders. During our critique, Cat explained that framing problems around open communication leads to practical solutions. Being able to express the need for something like a mid-day break can have a profound impact on educators' day-to-day quality of life. Jenny concurred, and believes from her experience exploring PPS that there is a lot of desire in this realm.

After our Wednesday workshop, our team met to discuss these important next steps. It was also Cat’s birthday!


We had a good time, but still got a lot of work done! We had some imbalance in the distribution of work while preparing for this presentation, and we've amended our team contract to (hopefully) improve delegation of future tasks. We're also rethinking the responsibilities of team members who are not assigned facilitator or notetaker for a given week. One challenge is that some tasks end up being more involved than originally thought. Splitting up the work can be like cutting a pizza while blindfolded: everyone gets a slice, but there's no guarantee the slices will be anywhere near the same size. In the future, when we find that a task is too big or too small, we can further split and break down tasks (where possible) to keep everyone productive but not overburdened.

After addressing our coordination for this next phase, we began mapping our current questions and considering what we wanted to learn. What we realized through this exercise was that almost any available method of active research could provide insights into our questions, so we simply needed to prioritize what would work best for us and design experiences that will illuminate these areas.


Our next steps will include generative research and workshops, and our hope is to gain more insights into this aspect of interpersonal and organizational communication. Through our primary research, and framing under ARC, we’ve identified a few key aspects of effective communication:

  • Problem-solving mindset

  • Active listening

  • Maintaining open communication and feeling heard

Other considerations around collaboration structures:

  • How do educators coordinate their efforts to bring change?

  • How do they support or hinder adaptations or changes?

  • What visions do administrators see for the future of PPS, their roles, and the roles of educators?

Peter recommended that we also consider future contexts, and think about relevant trainings and preparation. Pandemics are not frequent, but when COVID-19 arrived, there wasn’t any plan in place. This put districts in an especially difficult position—reacting to sudden change is never easy, and they had no prior practice. Other sectors (especially government sectors) often need to prepare for scenarios that are unlikely to happen but are potentially very disruptive.

Thinking about this point reminds me of a very grim reality: school shootings in the United States have become so frequent that schools began holding drills. I was a high school student in 1999, when Eric Harris and Dylan Klebold committed a horrific massacre at Columbine High School in Colorado. It’s difficult to describe what a profound impact this had on my experience with public education. Growing up as a teenager in rural Utah, the proximity to this tragic event resulted in an immediate reaction. My school began conducting “random” locker searches. Teachers and counselors began interrogating students’ media consumption—at that time, it was believed that playing DOOM and listening to Metallica were red flags.

As a community, what we needed were meaningful policies. Instead, we were subjected to onerous and disruptive security measures, derived from alarmist and factually inaccurate claims. These responses didn’t prevent such tragedies, but they did add to the hardship of students who were already terrorized. School shooting drills have not made today’s kids any safer, because the root cause remains unaddressed. We needed policy then, and we need policy now.

Good policy, however, is only possible when there is a clear understanding of the problem. An important role of a vision of the future is to anticipate needs before they become a crisis. This can lead to preventative policy and proactive measures. To understand the present, we also need to understand the past; to understand the future, we need to understand the present. To gain deeper insights beyond interviews, we’re planning to start participants with a cultural probe diary study (this might be in their chosen format or sent daily by us) and then bring a mix of administrators and teachers into a workshop.

We’re still working out the details, but our current favored approach is the “Draw Toast” exercise.


We’re nearly done with our protocols and will be contacting our participants on Monday. I’m curious about what will be confirmed (from our exploratory stage) and what will be new or contradictory to our current understanding. We’re now focusing on something specific, but there are degrees of assumption going into this next step. I’m excited (and a little nervous) to learn more from our participants and to benefit from their lived experiences.

Brooklyn Laboratory Charter School - Designing For An Academic Year Under The Context of COVID-19

This summer has only just begun, and I am now involved in two separate projects related to educational institutions and their response to COVID-19. Working with Dezudio and members of my CMU Design cohort, we are consulting with a handful of teams to help them develop their strategy and documentation for Brooklyn Laboratory Charter Schools (LAB).

In the first week of this project, Brooklyn Lab teams presented their strategies for the 2020/2021 school year. There was a lot of information to sort through, and many different ways to interpret the key terms (e.g., “A” and “B” shifts, virtual, online, in-person, “brick and mortar,” traditional, etc.). Additionally, all stakeholders are confronted with multiple layers of complexity. This impedes decision making and increases stress for all involved. I believe that it is highly appropriate to view these policies through the lens of a navigation system.

For students and their parents, this navigation involves when, where, how, and with whom they will receive an education. For instructors and staff, there are questions of when and where they will perform their most common tasks, how they will interact with the students they serve, and when and where they will conduct their professional obligations beyond the classroom. For administrators supporting a highly modified school year format, there is a clear need for mapping to help them maintain “the big picture.”

To achieve successful navigation, we may want to leverage the familiar look and feel of MTA maps and adopt a language to reflect this navigation mindset. Instead of calling different delivery formats “shifts,” we can call them “tracks,” with different activities as “stations.” This metaphor can help reduce the cognitive load for stakeholders, enabling them to make decisions faster and with more clarity.


Maps are useful for reducing cognitive load; navigating a city this size requires abstraction and timed decision making, and maps provide scaffolding for making those decisions.



I agree with Klaus’ assessment of the classroom diagrams: simple shapes and colors can be used to identify the most common categories (students, teachers, etc.), with a key to help reinforce the symbols’ meaning. I’ve included some sketches and prototypes from last week to show what these concepts might look like.

Concept sketch to explain the multiple channels; a student’s schedule might include a combination of in-person, alternative location, and online/in-home instruction.


Using familiar conventions as metaphor will help parents, teachers, and students understand these new policies.


Maintaining high standards of rigorous academics is a challenge even under the most ideal conditions. This diagram maps the relationship between leadership, teachers, students, and the different education delivery formats.


A key with simple colors and shapes can help readers understand the meaning of words like “Hybrid.”


Kinetic-friendly spoon project Mega Post

That’s a wrap! It’s certainly been an interesting semester, but now I am ready to put it behind me. Reflecting on the spoon project, I have some final thoughts and observations. First, I want to thank the fine folks at CMU School of Design. From the amazing and hardworking faculty and graduate student cohort, I have had nothing less than inspiration and encouragement throughout this entire process, despite the obvious challenges of working remotely.

Rendering of sixth and final (?) spoon design. I pulled the kitchen design (Pierre Gilles) and bowl (Damogran Labs) from GrabCad.com. The spoon and coffee mug are mine.


This project was divided into two parts: the first part focused on exploring different ways of prototyping and making. This was described to me as an informal way of A/B Testing for methods. The second part involved the deliberate iteration of prototypes through user testing — a challenge in the context of a global pandemic and social distancing. To make the most meaningful design choices possible given limited resources, I decided to leverage the power of physical simulation to supplement the making of physical prototypes.

There are a variety of 3D software tools that offer some degree of physical simulation. For this project, I selected Maxon Cinema 4D R20 (Educational License) and Blender as my two ways of making. I chose these because I am already familiar with Cinema 4D and know how to manage a workflow in that context, because Blender is open source and free for anyone to use, and because both programs work under MacOS and Windows environments (my rendering workstation is a Hackintosh with multiple operating systems, which grants the flexibility to overcome certain technical limitations). My initial experiments with Cinema 4D were… not great.

My very first (and failed) attempt to simulate fluids in Cinema 4D. Carnegie Mellon University, School of Design, Prototyping for Interaction, Spring 2020.

As you can see, there are “physics” happening here, but they are not anything close to the physics of the real world. This is not “real world” physics, this is Asshole Physics:

Zachary "Spokker Jones" Gutierrez and I came up with the term "Asshole Physics" when we were discussing the game and the physics models it employed. Basically there's a lot of crap you can knock over and kick around, including dead bodies, buckets, cans, and little sections of drywall which are standing around in the middle of rooms for no obvious reason. Zachary casually mentioned, "I have made it a point to knock over every fucking thing in that game. I am living out my fantasies of being a giant asshole," and I responded by stealing his "asshole" comment and claiming that I made it up. Thus "Asshole Physics" was born.

Without more sophisticated plugins to simulate fluid, Cinema 4D R20 is only capable, “out of the box,” of non-Newtonian semisolids. I can make stuff bump around and “squish.” I can have a 3D character micturating on the side of a building. I can create the appearance and illusion of something like a fluid, but with such restrictions, I could not realistically evaluate my spoon designs. I explored my options and found that Next Limit’s RealFlow plugin would meet my basic needs. Best of all, they offer a free 30-day trial! My initial excitement quickly waned after the plugin failed to install and activate on my system…

(This email chain is long and covers a week of back and forth with customer service. I am including the entire conversation as a way to recreate my experience. While this may not directly relate to the scope of this project, I still believe that there is value in documenting the unexpected problems that crop up when trying to do something new.)

[Screenshots of the email thread with Next Limit support: Mail_02.png through Mail_09.png]

It took a week to finally get everything sorted with the demo. During that time, I began to explore option B: Blender.

Blender is a free, powerful, open source 3D creation tool. Best of all, it includes the mantaflow fluid simulation engine (since version 2.82). I have worked with Cinema 4D on other projects and have become fairly comfortable with its interface. Given my experience with Fusion 360, Inventor, and C4D, I knew that I would need to overcome a learning curve before I could use Blender to meet my needs for this project. Fortunately, I was able to find a spectacular tutorial series for beginners.

If you want to read more about my experience with the tutorial, click here.


This tutorial was ideal because it involved exercises that helped me learn how to use the interface, and covered several different workflows. I was really impressed with Blender’s node-based material system and procedural textures. You can work strictly with parametric modeling, or you can discretely modify mesh geometry to create highly organic and imperfect forms. I’m excited to work with Blender on future projects. It’s a very exciting time to be working in 3D.
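As an aside, everything the tutorial builds through the node editor can also be driven from Python. Here is a minimal bpy sketch (my own illustration, not part of the tutorial; the material name is arbitrary) of wiring a procedural noise texture into a material:

```python
import bpy

# Create a material and switch it to node-based shading.
mat = bpy.data.materials.new(name="ProceduralSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# New materials start with a Principled BSDF wired to Material Output.
bsdf = nodes["Principled BSDF"]

# Drive the base color with a procedural noise texture.
noise = nodes.new(type="ShaderNodeTexNoise")
noise.inputs["Scale"].default_value = 12.0
links.new(noise.outputs["Color"], bsdf.inputs["Base Color"])

# Assign the material to the active object (e.g., the donut).
bpy.context.active_object.data.materials.append(mat)
```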

While working through these tutorials, I began sketching and working in Fusion 360 to craft my first spoon designs for part 2 of this project. You can read more about this experience here.

Takeaways from Part 1

I really appreciated the responsiveness from the team at Next Limit. Clearly there are problems with the implementation of their product’s copy protection. This is an all-too-common problem in the world of software. Programmers gotta eat just like everybody else, and we certainly should make sure that the talented and hardworking folks behind the code are able to put food on their table at the end of the day. Piracy can deprive a small business of the revenue needed to keep the lights on, so I am absolutely sympathetic to this reality and the risks involved when you release your software for demo purposes. Getting people to pay for something that they can easily get for free is a challenging proposition. At the same time, you cannot realistically expect customers to pay for software if they cannot try it first.

Ultimately, this week of back and forth with customer support was a critical loss: I never completed a side-by-side comparison of fluid simulations. While I did eventually succeed at installing and using RealFlow for fluid simulations (and was honestly impressed with how easy it was), I did not have enough time to set up a comparable simulation to evaluate spoon designs. My trial expired about a week ago, and I see this aspect of the project as a lost opportunity. If Next Limit adopted licensing practices similar to Maxon’s (verification through a .edu email address), they could offer an educational package of their RealFlow plugin.

Blender really came through for me. The learning curve was aggressive, but not impossible. While I found mantaflow to be a respectable and entirely capable fluid simulator, it was not without its own share of issues. I spent a lot of time making granular tweaks to improve the fidelity of my simulations, while also using the observations from my simulations to inform design decisions for my spoons in part 2 of this project.

Part 2: Design Iterations Based on User Testing

While this project required user testing and design iterations based on feedback, I decided to limit the user evaluations to the handle shape and the spoon’s overall dimensions. This was not an arbitrary decision or an excuse to focus on physical simulation of fluid dynamics (with user testing as an aside). No, this decision was based on the nature of the course for which it was assigned: Prototyping for Interaction Design. This semester I have been focusing on designing for interaction (arguably, all designers do, at some point in their process, focus on this aspect). When thinking about the tools we use (to eat food) as a system, it is important to consider the touchpoints involved. The handle of a spoon is a non-trivial component. It can take on many forms, and naturally includes affordances. How someone holds a spoon, and how easy it is for them to use it, are central to the evaluation of the design.

The design iterations were highly generative in nature. Inspired by both user evaluations and physical simulations, I maintained a homeomorphic continuity: treating the initial shape as an elastic form to be molded and reshaped to maximize performance. Knowing how a concave shape might be optimized to perform under rapid movement (I wanted to create something useful, and the physical simulation of fluids provided a means of evaluation) is only one aspect of a more complicated interaction, and this test alone could not fully address human needs. When physical form is designed and directed to improve user interaction, and physical properties are given equal consideration, it is possible to create a truly useful tool. I realize that this is a very technical description, but it is easier to understand when properly visualized. I have rendered a compilation sequence to show how this spoon shape evolved to its final(?) form (I am still considering a physical prototyping stage for this project over the summer).

A sequence of fluid dynamics tests designed to evaluate fluid retention of concave forms. Carnegie Mellon University, School of Design, Prototyping for Interaction, Spring 2020.

Toward the latter half of this sequence, you will notice a change in colors (for both the liquids and the spoons). I decided to differentiate the final rendering sequences, as these were based on user evaluations. The colors chosen for these final sequences are based on the color tags used for the user test:

These printouts are derived from DXF vector images exported from Fusion 360. The designs shown are oldest (top) to newest (bottom). The fifth design (blue) is rendered with a blue body and green liquid.


I printed and mailed the paper prototype to a potential user suffering from ongoing hand tremors (my partner’s mother). I sent this without written instructions. Instead, I only provided different color tags to facilitate feedback. My user let me know that the red spoon handle was in the “Goldilocks” zone in terms of size and shape: not too big, not too small, not too curvy, not too straight. Using this feedback I constructed the sixth and final (?) form — see the first image of this post.

The user test included a direct side-by-side comparison with existing dinnerware.


Before developing these simplified paper prototypes, I also experimented with ways of making more three-dimensional forms that could be sent in the mail. While this novel approach showed some potential, I was concerned with how user error might complicate or (even worse) bias feedback. Still, these paper prototypes helped me to better understand and interpret the scale of my 3D models.


Final Thoughts

This project still feels somewhat incomplete. Perhaps this is because the generative design process itself can always demand further iteration, or maybe it is because I have not yet created a physical prototype that can actually be tested as an eating instrument. Maybe it is only because there were still a few “rogue droplets” (grrrrrr) that I simply could not keep contained with the completion of my sixth iteration. Whatever the net effect might be from these various shortcomings, I am pleased with the learning opportunities that were presented throughout this exploration of design.

Were I to continue with this process, the next steps would be to 3D print the latest shape using a food-safe material (there are a few third-party vendors that offer this service). I would then ship that latest design for further user evaluation. I believe that there are still many additional iterations necessary before I could defend having created something that satisfies the criteria I set out with this project (i.e., a spoon that overcomes the challenges of involuntary muscle movements and essential tremors).

If I were to collaborate with others, I would also want to evaluate the ecological and economic impact of such a device. How might we go about manufacturing at an appropriate scale? How might additional user tests with a wider audience influence the existing form? There remain many unanswered questions, and I have a newfound respect for the power of generative design.

Bugs in the Blender

I have continued to have luck exploring fluid simulations in Blender, but this process has not been without its quirks. I recently encountered a strange issue related to the Particle Radius setting:

Particle Radius

The radius of one liquid particle in grid cell units. This value describes how much area is covered by a particle and thus determines how much area around it can be considered as liquid. A greater radius will let particles cover more area. This will result in more grid cells being tagged as liquid instead of just being empty.

Whenever the simulation appears to leak or gain volume in an undesired, non-physically accurate way, it is a good idea to adjust this value. That is, when liquid seems to disappear, this value needs to be increased. The inverse applies when too much liquid is being produced.

What does this look like in practice? My most recent simulation actually seems to produce fluid as the scene progresses.
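To make that advice concrete, here is a hypothetical bpy sketch of nudging the value down when a simulation gains volume (the object name "Domain" is a placeholder for your fluid domain):

```python
import bpy

# "Domain" is a placeholder name for the fluid domain object.
domain = bpy.data.objects["Domain"]
settings = domain.modifiers["Fluid"].domain_settings

# Default is 1.0. Per the docs: decrease it when the sim produces too
# much liquid; increase it when liquid seems to disappear.
settings.particle_radius = 0.9
```

The change only takes effect after re-baking the domain.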

Nevertheless, I was able to gain critical insights into this form and will continue to iterate new designs. This is being done in conjunction with paper prototyping. These forms are less sophisticated, but still provide valuable information about how users will experience and interact with this flatware.


Spoonfuls of updates

This week was packed full of progress on multiple projects. I received feedback for my group’s birth control information app “MyGallery.” Our work was even featured on CMU’s Design page.

Crafting an iconographic representation for the withdrawal method was my proudest moment.


I’ve continued to explore fluid simulations with Blender, but I’ve run into some technical hurdles: Blender 2.82 uses a variety of backends to leverage GPUs for rendering and computation. It offers an AI-driven denoiser (OptiX), CUDA path tracing, and OpenCL. My MacBook Pro has an AMD Radeon Pro 5500M GPU, as well as the option to plug in a Radeon Frontier Edition (first-generation Vega) eGPU over Thunderbolt 3. There is plenty of GPU compute power in either configuration, but there is a snag: MacOS 10.15 (Catalina) has deprecated OpenCL in favor of Metal 2+, while CUDA and OptiX are proprietary to NVIDIA GPUs, and Apple hasn’t shipped a Mac with an NVIDIA GPU since Kepler (the GeForce 700 series). Blender supports AMD ProRender, but I found it terribly unstable.

I could easily slip into a tangent about how unfortunate the breakup between Apple and nVidia truly is, but I will spare you.

My current workflow involves queuing some tasks to my desktop, running Windows 10. The GPUs are dual Radeon VIIs. Unfortunately, I found that rendering in Blender is unstable when both GPUs render in parallel. No problem, since I can free up the other GPU for Folding@Home (a hobby of mine that has exploded in response to COVID-19). Who would have guessed that a global pandemic would boost a distributed computing project to exascale?
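For anyone facing the same juggling act, Cycles device selection can be scripted. This is a sketch under my setup’s assumptions (Blender 2.82, OpenCL, two Radeon VIIs), not a universal recipe:

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPENCL"  # Radeon cards use OpenCL in 2.82
prefs.get_devices()  # refresh the device list

# Enable exactly one GPU, leaving the second Radeon VII free for
# Folding@Home; disable everything else (including the CPU).
gpu_enabled = False
for device in prefs.devices:
    if device.type == "OPENCL" and not gpu_enabled:
        device.use = True
        gpu_enabled = True
    else:
        device.use = False

bpy.context.scene.cycles.device = "GPU"
```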

Despite these obstacles of platform compatibility, I have made significant progress on my simulation-based research. It is difficult to overstate how exciting this project has been for me. For some context: the ASCI Red supercomputer (at Sandia National Laboratories) was built in 1996 and was the fastest supercomputer in the world until 2000. It was the first computer to achieve true terascale computing (one trillion floating point operations per second). I built my first terascale computer in 2013, shortly after leaving my job at Intel. There was something very gratifying about building a computer with a CPU I helped manufacture. GLaDOS G4 (you can see the project here, scroll down to “Everything Else”) was built with a GeForce GTX 780 GPU and an Intel Core i7-4770K overclocked to 4.5 GHz. It ran nearly silent and fit inside an upcycled Apple Power Mac G4 (microATX-equivalent) case.

The ASCI Red supercomputer was designed to simulate nuclear weapons tests. Today, I am using a system roughly ten times more powerful to simulate soup spilling out of a spoon. I was inspired to approach this problem by two projects. The first was a 2013 project from Portland State University (my alma mater) to make a coffee cup for zero-gravity environments. They used drop cages and 3D printing to iterate several designs until they had a shape that held liquid. “It wasn’t needed, but it was requested.”

The other project hit me right in the heart.

The S’up Spoon is the embodiment of good design. The design was inspired by deep empathy for a user’s problem, and the solution involves as little design as possible. There are few technologies in this world that we trust enough to put in our mouths. If you can make it in this space, you can make it (almost) anywhere. During the fall semester, Moira and I visited the Carnegie Museum of Art. They had an exhibition on accessibility design, and I was brought to tears by stories of innovation and vibrant improvements to quality of life for people with disabilities. Technology, at its very best, empowers people to realize their fullest potential. We can easily get lost in the exhilaration of the complex, but this impulse must not dampen our ability to appreciate the elegance of simplicity. Some problems are best solved by form. I saw many incredible solutions in that exhibition, but this spoon has really stuck with me.

My goal is not to make something better, but perhaps a little bit different. The shape of the S’up Spoon is intuitive, and if we had never seen a spoon before, we might conclude that it is the better design over more traditional forms. It is, however, under our current cultural context, a strange thing to behold. It looks more like a wizard’s pipe or a warrior’s horn. It is beautiful and ergonomic. I do not intend to improve on those qualities. Instead, my goal is to make a spoon that is inconspicuous while still achieving similar results for users who suffer from motor movement difficulties.

How has my first design fared under simulation?

While I can certainly see the appeal of a long hollow channel, I’ve become increasingly concerned with how this shape may be difficult to keep clean. I can imagine objects getting wedged toward the back, depending on what is being consumed. I have begun to work on a second iteration with a shallower channel. Still, this first iteration does fairly well: it manages to retain most of the 15 ml (i.e., 1 tablespoon) of fluid under rapid movement.
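For repeatability, the “rapid movement” in these tests is just a keyframed shake, so every design faces identical motion. A simplified sketch (the object name and offsets are illustrative, not my exact scene):

```python
import bpy

spoon = bpy.data.objects["spoon_v1"]  # placeholder name

# A quick lateral shake at 24 fps: out, back, and settle.
for frame, x in [(1, 0.0), (12, 0.15), (24, -0.15), (36, 0.0)]:
    spoon.location.x = x
    spoon.keyframe_insert(data_path="location", frame=frame)
```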

I enjoyed this simulation so much that I decided to make a rendering:

I have not yet gotten back into Cinema 4D to evaluate RealFlow. Despite the challenges regarding platform compatibility, I am truly impressed with how powerful Blender has become with this latest release.

Now that I have established this workflow, I can easily switch out revised designs to test under identical conditions (see the sketch below). I’m still not sold on the current handle shape, and I think I can improve liquid retention by tweaking the angle of the lips. The flat-bottomed (Chinese-style) spoon does fairly well, with its obtuse-angled walls. Next, I will try a concave structure with a wider base for the handle and a more aggressive descending angle.
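The swap itself is mostly mechanical. A hypothetical sketch of dropping a revised design into the same scene (the file path and object names are illustrative):

```python
import bpy

# Import the revised design (path is illustrative).
bpy.ops.import_scene.obj(filepath="//spoon_v2.obj")
spoon = bpy.context.selected_objects[0]

# Register the new spoon as a collision obstacle in the existing sim.
mod = spoon.modifiers.new(name="Fluid", type="FLUID")
mod.fluid_type = "EFFECTOR"
mod.effector_settings.effector_type = "COLLISION"

# Retire the previous iteration: removing its Fluid modifier takes it
# out of the bake, and hiding it keeps renders clean.
old = bpy.data.objects.get("spoon_v1")
if old is not None:
    if "Fluid" in old.modifiers:
        old.modifiers.remove(old.modifiers["Fluid"])
    old.hide_viewport = True
    old.hide_render = True
```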

Prototyping – Part 2

Working with Blender has continued to go well.

I have also been looking at some of the existing solutions in this space:

KFS Easy Eat

http://www.eating-help.com

Liftware, by Verily

https://www.liftware.com

EliSpoon

https://elispoon.com

Ornamin - Supportive Cutlery (Parkinson’s)

https://www.ornamin.co.uk/shop/cutlery-set?number=SW24

S’up Spoon

https://www.youtube.com/watch?v=C8nNlWw6KbA

Apex Medicine Spoon

https://www.riteaid.com/shop/apex-medicine-spoon-0233706

I have been sketching and studying these forms in consideration for my own designs.

Some shapes are unappealing because of their associations. These still deserve consideration, as they function well in this space.


This week I will begin iterating designs in Fusion 360. Hopefully, I will finally be able to make use of my RealFlow trial license. I’m curious to see how the “out of the box” settings function with these geometries.

Fluid Simulation in Blender

“Throw off your fears let your heart beat freely at the sign that a new time is born.” — Minnie Riperton

I’ve completed my workflow design for fluid dynamics testing in Blender. Here’s a proof of concept:
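For anyone who wants to reproduce a similar setup, here is a minimal bpy sketch of the core pieces (a liquid domain plus an inflow; names and values are illustrative, using the 2.8x mantaflow API):

```python
import bpy

# Domain: a cube large enough to contain the simulation.
bpy.ops.mesh.primitive_cube_add(size=4.0, location=(0.0, 0.0, 2.0))
domain = bpy.context.active_object
dmod = domain.modifiers.new(name="Fluid", type="FLUID")
dmod.fluid_type = "DOMAIN"
dmod.domain_settings.domain_type = "LIQUID"
dmod.domain_settings.resolution_max = 128  # finer grid, slower bake

# Inflow: a small sphere that emits liquid into the domain.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.2, location=(0.0, 0.0, 3.5))
inflow = bpy.context.active_object
fmod = inflow.modifiers.new(name="Fluid", type="FLUID")
fmod.fluid_type = "FLOW"
fmod.flow_settings.flow_type = "LIQUID"
fmod.flow_settings.flow_behavior = "INFLOW"
```

From there, the spoon enters the scene as an effector, and the bake runs from the domain’s physics settings.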

Now I just need to figure out RealFlow in C4D…

Blender: Time To Make The Donuts

I’m very happy with the results of my first project in Blender.

White Frosting


Classic Pink


And Nekkid!


After struggling with Cinema 4D, I decided to use a plugin rather than trying to kludge together some handcrafted fluid simulation. Unfortunately, I ran into a licensing problem with Next Limit’s RealFlow plugin. I’ve been emailing back and forth all week, and they finally got a fix for me — SUCCESS! While I waited to sort that out, I decided to give Blender a try. I began following this tutorial, but quickly found myself getting lost. I needed to become more familiar with the software and interface, so I decided to follow a beginner’s tutorial to get my feet wet. I chose this particular video series because it employs procedurally generated elements, and covers all the basic modeling commands, node handling, and the complete stack of scene construction. And also: donuts are delicious. The world is terrifying, and we could all use something sweet.

I am honestly shocked that Blender is a free program. Many 3D programs are horrifically expensive; without student licensing, I could never afford to touch most of the 3D tools I’ve been learning over the years. Yet Blender seems to be very capable as a 3D program. I have only scratched the surface, but it is very impressive. Now that I have familiarized myself with Blender, and have a working fluid simulator plugin for C4D, I am in good shape to begin A/B testing.

Evaluating Tools for Information Architecture

OmniGraffle for Mac

From the website:

OmniGraffle is a comprehensive, yet easy to use diagramming and drawing application. Drag and drop to create wireframes, flow charts, network diagrams, UI mockups, family trees, office layouts, and more. OmniGraffle 7 comes with plenty of features to get started in Standard. OmniGraffle Pro has everything in Standard, plus features suited specifically for folks that make a living designing or working with graphics—things like Shared Layers, Artboard Layers, Non-Destructive Shape Combinations, Blending Modes and Fill Effects, Visio support, SVG export, and more.

Weaknesses:

  • Price - even their educational license for students costs $89.99. They do offer a free trial, but it only works for 14 days

  • Compatibility - not easy to transfer projects to other platforms (e.g., Visio)

  • Learning curve - many reviews complain that it is difficult to learn how to use

xSort for Mac

From the website:

  • Visual environment simulating a table with cards (and outline view).

  • Supports open, semi-open and closed exercises.

  • Supports sub-groups (participants can put groups inside groups).

  • Control every aspect of the exercise (sorting type, card placement, etc.).

  • Statistical results (cluster tree, distance table, etc.) updated in real time.

  • Displays individually all the info related to an individual session.

  • Easily select the sessions you want to use based on different criteria.

  • Create, read, print and export reports with a single click.

  • Lock the document so that a participant may do only one session.

  • Fully integrated with Mac (Intel and PowerPC-based Macs).

  • Price - Free

Weaknesses:

  • 32-bit only (does not work with the latest version of MacOS)

  • No support

  • Has not been updated in years

PowerMapper Desktop

From the website:

  • Platforms - Macintosh and Windows

  • Webcrawl - Automatically maps websites

  • Agnostic - Works in-browser and on the cloud

  • Light system requirements - works well on older computers

Weaknesses:

  • Price - $150 per license and no educational license is offered, updates require annual subscription of $37.25

  • Limited use - primarily designed for website analytics