We’re only joking, sort of … in this article we’ll talk about how we are starting to leverage Artificial Intelligence (AI), Machine Learning (ML) and Virtual Reality (VR) to support a programmatic approach to content creation, through the development of a Virtual Trainer solution.
Some of you may be asking: why use a Virtual Trainer? There are a number of benefits to using these technologies, especially when you consider the fast-paced development of VMware’s product portfolio and the global reach of Livefire. A Virtual Trainer capability allows us to:
- Programmatically create content, so we can create and update material faster and more efficiently. As a result, you, as the consumer, get the latest and greatest quicker than ever
- Use AI and ML to solve some of the language barriers we face
- Scale our offerings across regions: you no longer have to wait for us to deliver in your region, as we can cover multiple geos at the same time
- Introduce new capabilities and features quickly and easily, e.g. gamification
- Support disability and inclusion by introducing capabilities that meet the needs of these communities:
  - Closed captions – this is just the start
  - Sign language support (roadmap feature)
  - Color and image adaptation (roadmap feature)
  - Eye gaze control (roadmap feature)
  - and more…
As you can see, we can do so much more, in a manner that is agile and scalable, compared with traditional video recordings. If you want to meet Olive, watch the session below. We’ve also provided a link to launch the presentation in a separate tab, or you can even watch the presentation on an Oculus Quest.
…when watching on an Oculus Quest, why not see if you can find the hidden Easter egg…
You should definitely consider this a beta release: it is an internally built capability, developed with support from AWS. Given the beta nature of the offering, you may find some bugs or things that do not work as expected. We will use this as an opportunity to release pilot content to the field, which will allow us to use production-level consumption data to determine what works and what doesn’t, and thus fine-tune the solution further.
We plan to release some of our VLS content as on-demand content in this model; more details to follow.
I’d like to thank AWS Professional Services for their support in delivering this solution, specifically Rani Lian (AWS Senior Consultant) and Stephen Jenkins (AWS Senior Creative Consultant), for their efforts during the development of this MVP, as well as our very own Sandra Longoria-Garcia, who provided creative guidance. We hope you find this useful, and as always, any feedback is welcome.