Know Wear
Training
This full-day seminar provides a hands-on introduction to developing spatial computing applications using Apple's visionOS platform.
The course is aimed at developers already familiar with Swift and iOS/iPadOS development concepts. It equips them with the knowledge and skills to:
Grasp the core concepts of spatial computing and its potential applications.
Navigate the visionOS development environment and its tools.
Build basic interactions and experiences using SwiftUI and RealityKit.
Explore advanced topics like physics, animations, and multiplayer experiences.
The course structure involves a mix of lectures, hands-on labs, and breaks, divided into morning and afternoon sessions:
Morning Session:
Introduction to spatial computing, visionOS, and development environment setup.
Building your first spatial UI with SwiftUI, covering syntax, basic elements, and layering within the spatial environment.
Afternoon Session:
Introduction to RealityKit, its core concepts, and placing virtual objects in the real world.
Adding interactivity with gestures and physics simulations using RealityKit and ARKit.
Deep dive into advanced topics like spatial audio, MaterialX, and using Reality Composer Pro to enhance your project.
The course concludes with a Q&A session, code review examples, and resource sharing for further learning.
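The morning session's "first spatial UI" can be sketched in just a few lines. The following is a minimal, hedged example of a visionOS SwiftUI app; the names (`HelloSpatialApp`, `ContentView`) are illustrative, not from the seminar materials:

```swift
import SwiftUI

// Minimal visionOS app: a single window containing a simple spatial UI.
@main
struct HelloSpatialApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    var body: some View {
        VStack(spacing: 16) {
            Text("Hello, spatial world!")
                .font(.extraLargeTitle)  // a visionOS-specific text style
            Text("Built with SwiftUI on visionOS")
                .font(.title3)
        }
        .padding(40)
        // Gives the view the standard visionOS "glass" backing material.
        .glassBackgroundEffect()
    }
}
```

From here, the same SwiftUI layering concepts covered in the morning session (depth, ornaments, volumes) build on this skeleton.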
Overall, this seminar is a valuable opportunity for developers interested in delving into spatial computing and creating immersive 3D AR experiences using visionOS.
VisionOStore
Introducing the VisionOStore, which gives customers a realistic representation of products in a store. The store shows an inventory list on the right and the selected product in the main view. Customers can also fully interact with the product in an immersive view; in the future, they will be able to manipulate the product with just their hands. The video also shows how customers can add items to their shopping cart, delete items, and pay with Apple Pay.
Half-day session: 11 am – 1 pm.
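The layout described above, inventory on the right with the product in the main area, could be approximated with a SwiftUI sketch like the one below. This is a hedged illustration only: `Product`, the sample inventory, and the view names are hypothetical and not taken from the actual VisionOStore source.

```swift
import SwiftUI

// Hypothetical model type; the real VisionOStore types are not shown here.
struct Product: Identifiable, Hashable {
    let id = UUID()
    let name: String
    let price: Decimal
}

struct StoreView: View {
    let inventory: [Product] = [
        Product(name: "Lamp", price: 49),
        Product(name: "Chair", price: 199),
    ]
    @State private var selection: Product?
    @State private var cart: [Product] = []

    var body: some View {
        HStack {
            // Main area: the currently selected product.
            VStack(spacing: 20) {
                Text(selection?.name ?? "Select a product")
                    .font(.extraLargeTitle)
                if let product = selection {
                    Button("Add to Cart") { cart.append(product) }
                }
            }
            .frame(maxWidth: .infinity)

            // Inventory list on the right, as in the demo.
            List(inventory, selection: $selection) { product in
                Text(product.name).tag(product)
            }
            .frame(width: 300)
        }
    }
}
```

The immersive product view and Apple Pay checkout shown in the video would layer on top of this with an `ImmersiveSpace` scene and PassKit, respectively.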
Native visionOS Dev
Understanding the full Swift Tech Stack:
Swift · SwiftUI · SF Symbols · SwiftData · Swift Charts · Swift Structured Concurrency · Swift Package Manager · XCTest
Articles for Swift Tech Stack: https://zoewave.medium.com/list/ios-dev-c43478948c7a
Understanding visionOS Tech Stack:
RealityKit · RealityView · ARKit · Reality Composer Pro · MaterialX · USDZ
RealityKit: A powerful framework for creating immersive 3D augmented reality (AR) experiences. It enables the creation of interactive and engaging AR worlds that seamlessly blend virtual elements with the real world.
RealityView: A SwiftUI view that seamlessly integrates RealityKit content into SwiftUI apps. It bridges the gap between RealityKit’s 3D rendering capabilities and SwiftUI’s user interface design, creating a cohesive experience.
ARKit: A framework for developing AR experiences that leverages the device’s sensors to track the user’s environment and place virtual objects realistically in the real world. It provides advanced spatial awareness capabilities for creating truly immersive AR experiences.
MaterialX: An open standard for describing materials. It provides a consistent and flexible way to define material properties, enabling developers to create realistic and visually appealing 3D objects.
Reality Composer Pro (MaterialX): A powerful tool for creating 3D content for visionOS apps. It provides a visual editor for designing and editing 3D models, materials, and scenes, and also integrates with MaterialX for advanced material editing.
USDZ: A file format for storing and sharing 3D assets. It is a compact and efficient format that can be used to share 3D assets across various platforms, including visionOS and other AR/VR applications. (see below for USDZ and OpenUSD)
Articles for visionOS Tech Stack: https://zoewave.medium.com/list/apple-vision-pro-visionos-development-c1f9a2863c01
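To make the RealityView entry above concrete, here is a hedged sketch of how RealityKit content is embedded in a SwiftUI hierarchy. The entity, sizes, and colors are illustrative assumptions, not part of any shipped sample:

```swift
import SwiftUI
import RealityKit

struct SpherePreview: View {
    var body: some View {
        // RealityView embeds RealityKit entities directly in SwiftUI.
        RealityView { content in
            // A 10 cm blue sphere with a simple non-metallic material.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            // Needed so the entity can receive input and hit-testing.
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: false)
            content.add(sphere)
        } update: { _ in
            // Runs when SwiftUI state this view depends on changes.
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Nudge the tapped entity upward by 5 cm.
                    value.entity.position.y += 0.05
                }
        )
    }
}
```

This shows the bridge in both directions: RealityKit renders the 3D content, while SwiftUI gestures drive interaction with it.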
Nvidia Omniverse
Nvidia Omniverse is a platform designed for building and collaborating on 3D projects. It allows developers to connect various 3D design tools and create realistic simulations. Those simulations can be used in different industries, like designing products or training robots in a safe virtual environment. Another key feature is the ability to build digital twins, which are computerized copies of real-world systems that can be used for testing and optimization.
https://developer.nvidia.com/omniverse
Nvidia Omniverse & Apple Vision Pro
Nvidia Omniverse is a platform for creating realistic 3D simulations. Apple Vision Pro is a high-resolution headset for augmented reality. Now, with new software, designers can use Omniverse to create digital twins (computerized copies of real things) and stream them directly to the Vision Pro. This lets designers see high-fidelity 3D models on the headset, which can be helpful for things like product design and factory planning.
NVIDIA Omniverse Cloud APIs
Universal Scene Description (OpenUSD) (https://www.nvidia.com/en-us/omniverse/usd/)
NVIDIA Graphics Delivery Network (GDN)
Apple Vision Pro
RTX Enterprise Cloud Rendering
SwiftUI
RealityKit
Calling all Visionary Developers: Design the Future in Spatial Computing
Explore the world of visionOS and build groundbreaking applications for Apple's next-gen mixed reality headset, Apple Vision Pro.
In this session, you'll learn everything you need to know to get started:
Master Swift, SwiftUI, and the visionOS development stack.
Unleash the power of ARKit and RealityKit to create immersive experiences.
Leverage Reality Composer Pro to craft stunning 3D content.
Gain access to exclusive resources and overcome development hurdles.
Whether you're a seasoned developer or just starting out, this session will equip you with the skills and knowledge to push the boundaries of spatial computing.
Don't miss out! Register today and join us on a journey to build the future!
Limited spots available!
Unveiling the Magic: A Recap of Our VisionOS Development Session
Are you ready to dive into the world of spatial computing and build groundbreaking applications for Apple Vision Pro? If you attended our recent session, you're well on your way! Here's a quick recap of everything we covered to jumpstart your visionOS development journey:
Mastering the visionOS Craft: We delved into the essentials of building visionOS apps, including leveraging the power of Swift, SwiftUI, and the comprehensive visionOS development stack.
Unlocking Spatial Experiences: ARKit and RealityKit were explored in detail, providing you with the knowledge to create immersive and interactive experiences that blur the lines between the physical and digital worlds.
Crafting Captivating Content: We showcased the power of Reality Composer Pro, an invaluable tool for designing and crafting stunning 3D content that will bring your visionOS apps to life.
Conquering Development Challenges: The session addressed common hurdles faced during visionOS development, providing valuable resources and insights to help you navigate the process smoothly.
This session offered a treasure trove of knowledge for both seasoned developers and those new to the world of spatial computing. By attending, you've gained the foundation to push the boundaries of what's possible and become a pioneer in the exciting realm of visionOS development.
Stay tuned for more upcoming sessions where we'll delve deeper into specific aspects of visionOS development and help you unlock your full potential as a spatial computing creator!
Notes on OpenUSD & USDZ.
Universal Scene Description (USD) and USDZ are interrelated but serve different purposes.
Universal Scene Description (USD)
USD is an open-source framework for describing, composing, simulating, and collaborating within 3D worlds. It's more than just a file format; it's an ecosystem for handling 3D data.
Key features:
Interchange: USD allows seamless exchange of 3D data between various software applications used in content creation (modeling, animation, rendering etc.).
Collaboration: Multiple creators can work on the same scene simultaneously, with the ability to non-destructively edit different aspects.
Scalability: USD can handle large and complex 3D scenes efficiently.
Extensibility: The framework can be extended to incorporate new data types and functionalities beyond traditional 3D graphics.
USDZ
What it is: USDZ is an optimized package format that bundles USD content and its assets (geometry, textures, and so on) into a single archive for sharing across platforms (https://forums.developer.apple.com/forums/tags/usdz). It's a more lightweight and streamlined distribution format compared to loose USD files.
Key features:
Sharing: USDZ is ideal for sharing 3D models across platforms.
Simplicity: USDZ files are smaller and easier to handle than USD files, making them suitable for mobile or web applications.
Preserves key aspects: Despite its compact, single-file form, USDZ retains essential information about the 3D model, including geometry, materials, textures, and basic rigging.
Relationship between USD and USDZ
USD is the foundation that provides the core functionality for describing 3D scenes.
USDZ leverages USD but streamlines it for specific use cases, particularly sharing 3D assets in AR/VR applications or mobile environments.
You can convert a USD file to a USDZ file for sharing purposes, but not the other way around.
In summary, USD is the comprehensive framework for working with 3D scenes, while USDZ is the delivery format optimized for sharing specific 3D assets across platforms.
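In practice, a shared USDZ asset drops straight into a RealityKit scene. A hedged Swift sketch, assuming a file named toy_robot.usdz is bundled with the app (the filename and scale value are illustrative):

```swift
import SwiftUI
import RealityKit

struct USDZViewer: View {
    var body: some View {
        RealityView { content in
            // Load the bundled USDZ asset asynchronously; the geometry,
            // materials, and textures packaged in the archive come with it.
            if let model = try? await ModelEntity(named: "toy_robot") {
                // Scale down if the asset was authored at a larger unit size.
                model.scale = [0.01, 0.01, 0.01]
                content.add(model)
            }
        }
    }
}
```

This is the delivery-format workflow in miniature: author and compose in USD, export to USDZ, then load the single archive at runtime.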
Omniverse USD Composer vs. Reality Composer Pro
While both tools cater to 3D content creation, their underlying toolchains differ significantly in philosophy and target audience. Here’s a more technical breakdown:
Nvidia Omniverse USD Composer:
Core Engine: Utilizes Nvidia’s open-source Omniverse Nucleus, a scalable scene description database that facilitates real-time collaboration and data streaming across geographically distributed teams.
Scene Description: Relies on Universal Scene Description (USD), an open-layered format that allows for modular scene assembly and editing. USD enables efficient management of complex scenes with references to external assets and layered edits.
3D Editing and Authoring: Integrates with various 3D authoring tools like Maya, Houdini, Blender, and Adobe Substance through USD plugins. This allows artists to leverage their preferred tools for modeling, texturing, and animation, with final assets exported as USD files.
Rendering and Simulation: Integrates with a variety of third-party renderers like Pixar RenderMan, Nvidia RTX, and physically-based simulation tools like Nvidia PhysX. This flexibility allows for high-fidelity rendering and realistic simulations tailored to the specific project needs.
Material Authoring: Offers support for industry-standard Physically Based Shading (PBS) workflows, allowing artists to create realistic materials with advanced lighting and texturing capabilities. Integration with tools like Substance Designer and Mari enables advanced material creation and editing.
Animation Tools: Provides a timeline-based animation system with keyframe editing, character rigging capabilities, and integration with motion capture data.
Apple Reality Composer Pro:
Core Engine: Built on Apple's RealityKit framework, a high-performance 3D engine optimized for real-time rendering on Apple devices. RealityKit offers a streamlined approach for building 3D scenes for spatial and AR experiences.
Scene Description: Uses USD-based scene files packaged into RealityKit asset bundles, which is efficient for development within the Apple ecosystem and interoperates with broader USD pipelines.
3D Editing and Authoring: Provides built-in tools for composing scenes and performing basic geometry edits. For complex models, however, integration with external tools like Maya or Blender via USD export is typically necessary.
Rendering and Simulation: Real-time rendering is handled by RealityKit on top of the Metal graphics API for efficient performance on Apple devices. Physics and particle simulations are supported through RealityKit's built-in physics engine.
Material Authoring: Offers a node-based Shader Graph editor built on MaterialX for authoring custom materials, alongside simple property-based materials. For advanced texturing workflows, external tools like Substance Designer may still be needed, with results imported into Reality Composer Pro.
Animation Tools: Provides a timeline-based animation system with keyframe editing capabilities. While not as robust as professional animation tools, it allows for basic animations and character rigging for AR experiences.
In Conclusion:
Omniverse USD Composer offers a powerful and highly customizable toolchain, ideal for professional 3D artists working with complex scenes and requiring interoperability across platforms. Reality Composer Pro prioritizes ease of use and real-time performance for AR development within the Apple ecosystem. Its streamlined toolchain might not be suitable for highly detailed 3D assets, but it excels at rapid prototyping and iteration for AR applications. Both build on USD and USDZ as their underlying scene description.
visionOS Training @ Apple
We spent two full days at Apple, where they reviewed our material:
Swift / SwiftUI / Swift Structured Concurrency / SwiftData / visionOS
Author & Instructor
https://zoewave.medium.com/select-teaching-speaking-sessions-d2a8a75f024b
Introduction: Reactive Programming with SwiftUI and Combine Framework [iOS] (Springer)
Technical Book Editor: SwiftUI for Dummies [iOS] (Wiley)
Taught Swift at AT&T DevFest (see below).