My background in both 3D and web development gives me a natural fascination with the one area in which they intersect: 3D on the web. So, when I had the opportunity to team up with Henk Dawson and Caitlin Esworthy on a proof-of-concept for 3D on the web, I jumped at the chance. The goal of our project was to create an interactive 3D guitar demo that showcased a few features of a Stratocaster guitar. We also wanted the whole thing to work seamlessly on both desktop and mobile.
Diving into 3D on the Web
For this project, Henk handled all the 3D work, Caitlin designed the UI, and I took care of the coding. We wanted to have a collaborative development process, so we settled on PlayCanvas as the framework to bring our project to life. PlayCanvas’s online scene editor allowed Henk to set up the scene, models, lighting, and textures in a web-based project space that I could also use to manage my code. It’s similar to what an online Unity project would be like. As you might expect, though, its feature set is a little smaller than Unity’s. For the most part, I found it a pleasure to use and would highly recommend it to others getting into 3D on the web.
The Challenges of Interactions in 3D Space
One of the biggest challenges for me as I began writing the code for the project was understanding how to convert mouse (or touch) movement into 3-dimensional input. Usually on the web we deal with 2D input (up/down, left/right) controlling 2D elements on the screen. Using 2D input to control 2D movement is easy: it’s just a matter of directly mapping input to movement. But mapping 2D input to 3D movement is a bit more complex. Based on the user’s interactions, you need to be able to figure out things like:
- Which axis is the user trying to rotate the object around, based on the 2D path their mouse (or touch) traced? (And remember, the target object might already be rotated, zoomed, or moved on the screen; it isn’t oriented in a reliable way relative to the camera.)
- Is their mouse click intersecting an object in 3D space?
- Is their attempt to interact with a 3D object being blocked by another 3D object?
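For the first question, one common approach is trackball-style rotation. This is an illustrative sketch under my own assumptions, not necessarily the exact math the project used: the screen-space drag direction picks a perpendicular rotation axis in camera space, which is then expressed in world space using the camera’s right and up vectors.

```javascript
// Sketch: turn a 2D mouse/touch drag into a world-space rotation axis
// and angle, trackball-style. Vectors are plain {x, y, z} objects.

function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// dx / dy: drag delta in pixels.
// camRight / camUp: the camera's world-space right and up vectors,
// so the result stays correct however the camera is oriented.
function dragToRotation(dx, dy, camRight, camUp, sensitivity = 0.01) {
  // A horizontal drag spins the object around the camera's up axis;
  // a vertical drag spins it around the camera's right axis.
  const axis = normalize({
    x: camUp.x * dx + camRight.x * dy,
    y: camUp.y * dx + camRight.y * dy,
    z: camUp.z * dx + camRight.z * dy,
  });
  const angle = Math.hypot(dx, dy) * sensitivity; // radians
  return { axis, angle };
}
```

With an un-rotated camera (right = +X, up = +Y), a purely horizontal drag yields the world up axis, which matches the intuition of spinning the guitar left and right.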
In order to calculate collisions (i.e. whether a user’s click or touch intersected an object), I made use of raycasting. The premise is fairly simple: when the user clicks on the screen, the application projects a line from that screen position into the scene, receding toward the 3D horizon. I can then check whether that ray intersected any of my pre-defined collision meshes.
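The ray test itself can be sketched with plain vector math. This is a self-contained illustration, not the project’s actual code (which leaned on the engine’s built-in raycasting against collision meshes); spheres stand in for the colliders to keep the math short, and the nearest hit along the ray wins.

```javascript
// Illustrative raycasting sketch: cast a ray against sphere colliders
// and return the nearest hit along the ray.

function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }

// Distance along the ray to a sphere, or null if the ray misses.
// `dir` must be normalized.
function raySphere(origin, dir, center, radius) {
  const oc = sub(origin, center);
  const b = dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - c;
  if (disc < 0) return null;        // ray misses the sphere entirely
  const t = -b - Math.sqrt(disc);   // distance to nearest intersection
  return t >= 0 ? t : null;         // ignore hits behind the origin
}

// Test every collider and keep the closest hit: that is the object the
// user actually clicked, even if other colliders sit behind it.
function raycastFirst(origin, dir, colliders) {
  let best = null;
  for (const col of colliders) {
    const t = raySphere(origin, dir, col.center, col.radius);
    if (t !== null && (best === null || t < best.t)) {
      best = { t, name: col.name };
    }
  }
  return best;
}
```

Keeping only the nearest intersection is what makes occlusion checks possible: an object further along the ray is, by definition, behind something else.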
By detecting the first intersected object, I could tell whether the guitar body (yellow) was blocking the pickup selector (blue). That allowed me to prevent the user from grabbing the pickup selector from behind the guitar.
As I worked through these and other challenges, I was definitely forced to use math that I hadn’t used in a long time, or in some cases, ever. Learning on the job is half the reason I love what I do, though, so it was a fascinating process the whole way through. For some of the more challenging aspects, the PlayCanvas forums were a great source of support.
Making it Responsive
Another challenge was learning to deal with the varying WebGL support on different devices. On more than one occasion I loaded up the project on my phone only to find pretty serious display or interaction issues. In the end, though, I was able to iron out most of the bugs.
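One simple mitigation, sketched here as a general technique rather than the project’s actual code, is to probe for WebGL support up front so unsupported devices can fall back to static content. The function accepts anything with a `getContext` method, so it works with a real `<canvas>` element in the browser and with a stub when testing outside one.

```javascript
// Probe for WebGL support before booting the 3D app. Returns 'webgl2',
// 'webgl1', or null when no 3D context is available at all.
function detectWebGL(canvas) {
  const tryCtx = (name) => {
    try {
      return canvas.getContext(name);
    } catch (e) {
      return null; // some browsers throw instead of returning null
    }
  };
  if (tryCtx('webgl2')) return 'webgl2';
  // 'experimental-webgl' covers some older mobile browsers.
  if (tryCtx('webgl') || tryCtx('experimental-webgl')) return 'webgl1';
  return null;
}
```

In the browser you would call it as `detectWebGL(document.createElement('canvas'))` and, on `null`, swap in product photos instead of the 3D scene.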
Putting it all Together
As Henk finished up the 3D aspects of the project and Caitlin delivered her final plans and designs for the UI, I worked on finishing up the different interactions in the project, so that the user could:
- drag the pickup selector to select the pickups the guitar will use
- press the play button to start an audio clip of the guitar playing with the selected pickups
- click on a color swatch to switch the guitar materials
- rotate, zoom, and pan the guitar
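The first two interactions above share a small piece of state: the selector position decides which clip the play button starts. A hypothetical sketch of that wiring follows; the pickup names and clip filenames are my own placeholders, not the project’s actual asset names.

```javascript
// Hypothetical interaction state: the dragged pickup selector lands on
// a named position, and the play button looks up the matching clip.
const PICKUP_CLIPS = {
  neck: 'strat-neck.mp3',
  middle: 'strat-middle.mp3',
  bridge: 'strat-bridge.mp3',
};

let selectedPickup = 'middle'; // updated by the drag interaction

function selectPickup(position) {
  if (!(position in PICKUP_CLIPS)) {
    throw new Error('unknown pickup position: ' + position);
  }
  selectedPickup = position;
}

// Called when the user presses play; returns the clip to start.
function clipForSelectedPickup() {
  return PICKUP_CLIPS[selectedPickup];
}
```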
Eventually, we crossed off every to-do item and had a finished project on our hands.
My Impressions: Interactions, Training, and Entertainment
Working on the Stratocaster product demo got me really excited about the future of 3D on the web. At the same time, though, it caused me to think long and hard about what the practical applications actually are.
At this point, much of product marketing on the web is done via still images or videos. If you were to go to Amazon right now, you’d rely on the product photography to help you select your purchase. In many cases, I don’t think that’s going to change: photos are simply cheaper to produce, easier to manage, and more widely compatible.
I do, however, think there are areas where a 3D product demo can accomplish things that images cannot. Consider the following examples:
- Users can press buttons on a 3D product to experience how it works (i.e. see “what button does what”).
- It can be a very effective training tool. Unlike videos, which simply instruct, a 3D interactive training tool can respond to the user’s interaction and correct the user when they make a mistake (e.g. “Nope! Before pressing the engage button you must flip the ‘on’ switch.”).
- In a world where augmented reality is becoming more popular, 3D can give the user the ability to see what the product looks like in their own home.
And on top of all that, sometimes letting users explore your product in 3D simply makes it more fun. And that’s a good thing too!
In any case, with the Stratocaster product demo under our belts, the next project for Henk, Caitlin, and me was a venture into another aspect of 3D on the web: web-based virtual reality using the HTC Vive. Check back soon for my blog post detailing that adventure.