
3D, virtual, augmented, extended, mixed reality, and extended content forms: The technology and the challenges

Abstract

3D representations are a new form of information that, when coupled with new technology tools such as VR, AR, MR, 3D scanning/printing, and more, offer new support and opportunities for research and pedagogy, often with dramatic leaps in capabilities and results. When combined with a pandemic, the results can be even more dramatic and valued in a variety of collaborative and higher education applications. However, as with any new technology and tools, sizable challenges also result and will need to be addressed. These include accessibility, 3D object creation, hardware capabilities, storage, and organizing tools. All represent areas where the community of users and their standards organizations, like NISO, need to move aggressively to develop best practices, guidelines, and standards to ensure these new forms of data and technology tools are widely accessible. This paper provides a high-level overview to introduce people to the new information form, associated technologies, and their challenges.

1. Overview of VR/AR/MR/3D: What is it and why should you care?

Information comes in many forms; print, audio, and video have long been associated with libraries. But increasingly over the past five years, 3D representations have become a new form of information that allows for new kinds of pedagogy, exploration, and research in the drive to create new knowledge in the human mind. The creation of 3D objects, which in the past required expensive tools, has now become something that anyone with a hand-held device can engage in. Vast repositories of 3D objects have emerged that are increasingly coupled with metadata allowing for their access and subsequent use, both in traditional forms and in new, sophisticated applications known as Extended Reality (XR) tools. The X is a variable used to describe the assortment of immersive technologies available today, which includes 360-degree imagery, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and 3D content. With XR, there has been a shift from the creation, transmission, and consumption of information to the creation, transmission, and consumption of experiences. These experiences can help enhance learning and foster more engagement throughout the entire research and learning processes. XR has the potential to bring people, places, and shared experiences closer together than ever before.

Virtual Reality (VR) is an entirely new and simulated world that occludes one’s vision. It is essentially a computer tricking the brain into believing that the user is in a different space. Headsets using Six Degrees of Freedom (6DoF) tracking offer whole-room VR experiences, giving users more freedom to explore locations, interact with 3D objects, and collaborate socially within that space.
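
As a concrete illustration of what whole-room 6DoF support looks like in practice, the following is a minimal sketch of requesting a room-scale VR session through the browser-based WebXR Device API. It assumes a WebXR-capable browser and headset plus WebXR type definitions (e.g., the @types/webxr package), and is not tied to any particular headset vendor.

```typescript
// Minimal sketch: request a room-scale (6DoF) VR session via the WebXR Device API.
// Assumes a WebXR-capable browser/headset; must be called from a user gesture
// such as a button click.
async function enterRoomScaleVR(): Promise<XRSession | null> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-vr'))) {
    return null; // no VR support in this browser/device
  }
  // 'local-floor' places the coordinate origin at floor level, so users can
  // physically walk around virtual objects rather than only look around.
  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['local-floor'],
  });
  await session.requestReferenceSpace('local-floor');
  return session;
}
```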

Augmented Reality (AR) overlays information onto the real world with no spatial awareness. AR functions with a camera-equipped device using installed AR software that utilizes computer vision technology to display the digital object on the user’s screen. Although AR objects do not recognize or respond to physical objects in the real world, AR does allow users, unlike VR, to see their surroundings while an application superimposes digital objects onto their physical space.
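
To contrast with the VR sketch above, the same WebXR API can start a handheld AR session that simply superimposes rendered content over the camera feed. This is a minimal sketch under the same assumptions (WebXR-capable mobile browser, call initiated by a user gesture); without additional features, the session does not react to physical surfaces, matching the lack of spatial awareness described above.

```typescript
// Minimal sketch: start a handheld AR session (WebXR 'immersive-ar' mode) that
// overlays digital content on the device camera view without reacting to
// physical surfaces. Must be called from a user gesture.
async function enterAR(): Promise<XRSession | null> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-ar'))) {
    return null; // no AR support in this browser/device
  }
  return navigator.xr.requestSession('immersive-ar');
}
```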

Mixed Reality (MR) mixes digital content with the real world and reacts and responds to one’s space. MR removes the boundaries between real and virtual engagement via occlusion, where digital objects can be blocked by objects in physical space. MR users must wear a headset, but unlike VR users, they are not isolated from their physical space. An interesting aspect of MR is that a user’s physical space can now become their computer interface. Eventually, volumetric scanning technologies will become mainstream, enabling full-scale replicas of various 3D objects, including human bodies, to be completely immersed in virtual space.

3D scanning may be riding on the coattails of 3D design and printing technologies, but it has had a long history of useful applications. For thousands of years, people have developed tools and skills to help replicate objects in the real world. For example, ancient Egyptians made 3D masks from linen and plaster and painted them with images to protect the body in the afterlife. Now, digital 3D objects can be manipulated inside VR, overlaid onto reality using AR, made aware of their place among real objects in MR, and rotated 360 degrees on modern smartphones. The world is in 3D, so our content should be, too. Today, 3D scanning can be accomplished using a variety of applications that use laser, stereo vision, photogrammetry, time of flight (ToF), Light Detection and Ranging (LiDAR), TrueDepth, Light Field, volumetric, and other technologies. 3D objects, such as a heart, skull, molecules, and embryos, are notoriously difficult for students to envision via traditional learning methods (e.g., models). Adding XR course-related activities can help improve student learning and make it more transformational, so students can better visualize processes that are normally very difficult to conceptualize. As time marches on, animated 3D objects will start to show up in XR applications, adding to an even more realistic experience.
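
Once an object has been scanned, it typically needs to be delivered to students in a viewer they already have. The sketch below shows one common delivery path, loading a scan exported as glTF into a web page with the three.js library; the model filename is a placeholder, and three.js is used only as a familiar example of a rendering library.

```typescript
// Sketch: display a scanned 3D object (exported as glTF/GLB) in the browser
// using three.js. The model path 'models/scanned-skull.glb' is a placeholder.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1.5, 3);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

new GLTFLoader().load('models/scanned-skull.glb', (gltf) => {
  scene.add(gltf.scene);
  // Slowly rotate the scan so viewers can inspect it from all sides.
  renderer.setAnimationLoop(() => {
    gltf.scene.rotation.y += 0.005;
    renderer.render(scene, camera);
  });
});
```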

Because of the COVID-19 pandemic, many people have started working from home to help combat the spread of the virus. The combination of a sense of immersion and interactivity in XR is called telepresence, and several companies are developing applications to accomplish this type of location- and device-agnostic collaboration. This type of collaboration has serious potential for distance learning and other collaborative projects, giving users innovative opportunities to work together no matter where they are physically located. There will be no time and space limits as the physical world continues to blend, almost seamlessly, with the digital world.

In early 2020, Qualcomm announced Snapdragon XR2, the world’s first 5G-compatible XR chipset, with significant CPU/GPU/AI processing upgrades including 3K display resolution per eye. With this type of chipset, XR-compatible smartphones powered by 5G networks and utilizing split-rendering technologies, where high-end processing occurs remotely and is streamed over the air, will have the potential to bring high-quality digital 3D content directly to mobile devices and smart glasses that cost much less than current XR headsets (e.g., HoloLens, Magic Leap One).

What we are seeing is important for libraries and librarians for many reasons, including:

  • The realization that 3D technologies are just a new information format and therefore belong in the library.

  • XR technologies can improve learning.

  • The technologies are rapidly improving and becoming more cost effective: they generate a high return on expenditures while acquisition costs continue to decrease.

  • We are seeing steady rates of adoption in colleges and universities.

  • Libraries tend to offer access to these technologies on a non-preferential basis, i.e., to all members of their community, frequently at little or no cost.

  • Libraries, as omni-disciplinary collaboration centers on campus, allow the very best people and ideas to work together to maximize the benefits of the technology.

2. Use cases

One of the questions frequently asked when talking about extended realities and their application in libraries and higher education is: where is this type of work happening? The answer is all over the world and all across the disciplines within universities. Coherent Digital did an informal survey and found the technology was being used in:

1. Archaeology (Oxford University)

2. Architecture (St. Thomas University and University of Oklahoma)

3. Anthropology (Duke University)

4. Art & Design (Carnegie Mellon University)

5. Biology (Arizona State University and University of Oklahoma)

6. Chemistry (University of Oklahoma)

7. Dance (University of Minnesota, Duluth)

8. Education (University of North Texas)

9. Engineering (Lewis & Clark Community College)

10. Journalism (University of Southern California)

11. Law (University of Oklahoma)

12. Mathematics (Stockton University)

13. Medicine (Rowan University and University of Oklahoma)

14. Physics (Virginia Tech University)

15. Psychology (Stanford University)

16. Sculpture (Franklin Institute, Philadelphia, PA)

17. Theatre (University of Kansas)

18. Urban Studies (University of Duluth, Kansas)

19. Virtual Tours (Boise State University)

As was stated in the report, 3D/VR in the Academic Library, published by the Council on Library and Information Resources (CLIR), “Developing these new skills and collaborations around emerging technologies such as 3D/VR can potentially enhance the profile and maintain the relevance of the academic library both as the custodian and curator of all forms of research and educational data, and as a catalyst for innovation in scholarship and pedagogy at the heart of the twenty-first-century university” [1].

Bypassing this opportunity would not serve libraries well, whereas embracing it will. That is not to say doing so is without challenges; in fact, the next section covers that very topic.

3. The challenges: What are they and how can working through NISO help address them?

At the NISO Plus conference, one of the goals of our session was to identify areas, within the scope of NISO capabilities, where NISO could help address the challenges that these technologies create. Together with the audience, the following opportunities were noted:

Accessibility. A recent article in the Journal of Interactive Technology and Pedagogy noted that: “According to the Center for Disease Control, 26% of adults in the United States have a disability” and that, according to the U.S. Department of Education’s National Center for Education Statistics (2019), “19.4% of undergraduates and 11.9% of graduate students have some form of disability”. The authors further noted: “There are limits to making Virtual Reality (VR) accessible. The reality is that there will be students who are unable to use VR for a variety of reasons. Therefore, there should always be an alternative access plan developed so that students have access to non-VR learning materials as well” [2]. There are other questions that also need to be answered in planning for accessibility, such as:

1. Will people need to print the object in order to touch it?

2. Do you have the right to create the object?

3. Will the user want to print the object, and will they have that ability?

4. Will the object need closed captioning?

5. Will an audio track be needed for the visually impaired?

Virtually all of these items are handled in many different ways today and thus would benefit from the creation of guidelines, best practices, and/or standards.
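
As one hypothetical illustration (not drawn from any existing standard), an alternative access plan of the kind quoted above could be recorded as a small structured record alongside each XR learning object, so that the answers to these questions are captured up front. All field names below are invented for the sketch.

```typescript
// Illustrative only: one way to record an alternative-access plan for an XR
// learning object. Field names are hypothetical, not an existing standard.
interface AlternativeAccessPlan {
  objectId: string;
  printableForTactileUse: boolean;   // can the object be 3D printed for touch?
  printRightsCleared: boolean;       // do we have the right to reproduce it?
  closedCaptionsProvided: boolean;   // for any narration or embedded video
  audioDescriptionProvided: boolean; // for blind and low-vision users
  nonVRFallbackUrl?: string;         // equivalent non-VR learning materials
}

const heartModelPlan: AlternativeAccessPlan = {
  objectId: 'anatomy-heart-001',
  printableForTactileUse: true,
  printRightsCleared: true,
  closedCaptionsProvided: true,
  audioDescriptionProvided: false,
  nonVRFallbackUrl: 'https://example.edu/anatomy/heart-2d',
};
```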

Needed Tools: Working in the extended realities brings the need for new and different tools for faculty, researchers, instructors, and students. Some examples include:

  • Standardized measuring tools

  • Annotation tools

  • Highlighting capabilities

  • The ability to capture what the user is seeing (camera)

  • The ability to navigate around the object or, alternatively, to grasp the object and move it

  • The ability to change the lighting angle/shadows on the 3D object.

Bringing together standardized approaches that also pay attention to how the object is created would allow for broader use of these tools and the objects; a brief sketch of two such capabilities follows.
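
The sketch below illustrates two of the capabilities listed above, standardized measurement and adjustable lighting, using three.js purely as an illustrative stand-in for whichever engine an XR application is actually built on; it assumes the scene is modeled in real-world meters.

```typescript
// Sketch of two tool capabilities: standardized measurement and adjustable
// lighting. three.js is used only as an illustrative rendering library.
import * as THREE from 'three';

// Standardized measurement: distance between two picked points on a model,
// assuming the scene is modeled in real-world meters.
function measureDistanceMeters(a: THREE.Vector3, b: THREE.Vector3): number {
  return a.distanceTo(b);
}

// Adjustable lighting: repositioning a directional light changes the lighting
// angle and the shadows cast on the 3D object under inspection.
function setLightingAngle(light: THREE.DirectionalLight, azimuthDeg: number, elevationDeg: number): void {
  const az = THREE.MathUtils.degToRad(azimuthDeg);
  const el = THREE.MathUtils.degToRad(elevationDeg);
  const r = 10; // distance of the light from the scene origin
  light.position.set(
    r * Math.cos(el) * Math.sin(az),
    r * Math.sin(el),
    r * Math.cos(el) * Math.cos(az),
  );
  light.castShadow = true;
}
```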

3D Object Creation: Creating content to be used in the extended realities, either for research or pedagogy, requires one to consider a host of criteria and questions that will determine whether the object can be utilized, cited, and/or accessed. Some questions to answer and issues to consider include:

  • Has someone else already created a 3D version of an object that will meet your needs? How do you find out? There is currently no central repository for 3D objects analogous to WorldCat for monographs, but there needs to be one. Commercial solutions will more than likely appear, but that is not a certainty, nor are the terms of access to and use of the objects that a commercial entity will offer.

  • How will the object be used? This is a critically important question that must be answered in order to ensure that the object is created properly. Will a front-side scan be enough, or will users want to pick the object up in 3D, turn it around, examine it, and possibly look inside of it (as with buildings)? The answers to all of these questions will help define how the object will need to be created.

  • Ownership / Access rights. If the object that you want to scan already exists, it may carry copyright restrictions that will need to be observed. If you are creating a new object, then you will need to assign copyright (hopefully using open access licenses). If you want the object you’ve created to be accessible, widely used, and citable, you need to assign it open access rights, ensure that a rich metadata record is associated with it, and place the object in a repository for these types of objects; a sketch of such a record follows.
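
The following is a hypothetical sketch of the kind of rich descriptive record just mentioned, sufficient to make an object findable, citable, and clearly licensed. The field names and sample values are illustrative assumptions, not an existing community metadata schema.

```typescript
// Illustrative only: a minimal descriptive record for a newly created 3D object.
// Field names are hypothetical; no existing metadata standard is implied.
interface ThreeDObjectRecord {
  title: string;
  creator: string;
  dateCreated: string;            // ISO 8601 date
  captureMethod: 'photogrammetry' | 'laser' | 'LiDAR' | 'other';
  license: string;                // e.g., an open license such as CC BY 4.0
  sourceObjectRights: string;     // rights status of the physical original
  repositoryUrl?: string;         // where the object and its datasets are deposited
  persistentId?: string;          // DOI or similar, so the object can be cited
}

const exampleRecord: ThreeDObjectRecord = {
  title: 'Scanned ceremonial mask (teaching copy)',
  creator: 'University Library Digitization Lab',
  dateCreated: '2020-06-01',
  captureMethod: 'photogrammetry',
  license: 'CC BY 4.0',
  sourceObjectRights: 'Original object in the public domain',
  repositoryUrl: 'https://repository.example.edu/3d/mask-001',
  persistentId: 'doi:10.0000/placeholder-mask-001', // placeholder identifier
};
```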

Photogrammetry: Photogrammetry has become one of the primary tools for creating new 3D content. The question we pose is: what is needed to expand its utilization? For instance, one needs to understand that:

  • The capabilities of the object will have a direct impact on the size of the datasets used to create it; creating an object and saving all of the underlying datasets can easily generate 25–40 GB of data (a rough estimate of where that volume comes from follows this list). For the object to be reusable, one needs to save all the datasets that go into its creation and maintain all of the pieces needed to make the model replicable and reliable.

  • Complex object creation takes time, i.e., it can consume a great deal of processing power and sheer computer time. It is not unusual for some to use a supercomputer in generating a single 3D model.

  • This work requires expertise and training for it to be done well. It also takes skilled staff to maintain the integrity of objects, and keeping those skills up to date will require constant training.

  • Models must be accurate and repeatable to be sought after and used.
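
As referenced in the first bullet above, a quick back-of-the-envelope calculation shows how raw photogrammetry captures reach tens of gigabytes; the photo counts and file sizes below are illustrative assumptions, not benchmarks.

```typescript
// Rough estimate of the raw capture volume behind a single photogrammetry model.
// The numbers are illustrative assumptions, not measured benchmarks.
function estimateRawCaptureGB(photoCount: number, megabytesPerPhoto: number): number {
  return (photoCount * megabytesPerPhoto) / 1024;
}

// e.g., 600 high-resolution RAW images at ~50 MB each is already ~29 GB,
// before intermediate point clouds, meshes, and textures are added.
const rawGB = estimateRawCaptureGB(600, 50);
console.log(`Raw images alone: ~${rawGB.toFixed(1)} GB`);
```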

As with many of the points above, there is a crushing need for best practices in this area. Just standardizing the documentation process for recording all of the components used to create a model (processes, procedures, metadata, and copyright) would be a huge step forward.

4. Summary

Research interest in the extended realities can be measured by looking at the number of citations in Google Scholar between 2000 and 2018, which shows steady growth of roughly 10% per year. A recent article in Campus Technology reported that the market for “augmented reality technologies will soar to $100B by 2024” [3]. Many might say: Yes, ok, the market is growing fast, but what effect will the pandemic have on this interest and demand? Many institutions will not have the resources to explore new approaches. However, as Joseph Aoun recently said: “As with most disruptive events, this one brings opportunity. The institutions that will thrive in the future will be the ones that embrace online platforms, not just for hastily-assembled, short-term replacement for classes, but long-term expansions of classroom instruction, campus life and off-campus learning” [4]. Obviously, the authors agree with that thinking.

References

[1] 

J. Grayburn, Z. Lischer-Katz, K. Golubiewski-Davis and V. Ikeshoji-Orlati (eds), 3D/VR in the Academic Library: Emerging Practices and Trends, CLIR, Arlington, VA, 2019, https://www.clir.org/wp-content/uploads/sites/6/2019/02/Pub-176.pdf, last accessed June 6, 2020.

[2] 

J. Clark and Z. Lischer-Katz, Barriers to supporting accessible VR in academic libraries, The Journal of Interactive Technology & Pedagogy (2020), https://jitp.commons.gc.cuny.ed/barriers-to-supporting-accessible-vr-in-academic-libraries/, last accessed June 6, 2020.

[3] 

D. Nagel, Augmented reality market to reach $100 billion by 2024, Campus Technology (2020), https://campustechnology.com/articles/2020/02/27/augmented-reality-market-to-reach-100-billion-by-2024.aspx, last accessed June 6, 2020.

[4] 

J. Aoun, More than bricks and mortar, The Chronicle of Higher Education (2020), https://www.chronicle.com/paid-article/more-than-bricks-and-mortar/260, last accessed June 7, 2020.