Matt Cook is a philosopher-turned-emerging-technologies librarian at the University of Oklahoma. His job entails bringing libraries and education into the future by dreaming up ways to meld the old with the new. He’s already developed a walking meditation tool, in use across the country, that lowers students’ stress levels as they study. He also pioneered a navigational app that can guide a new, intimidated student from their dorm room to a study room, or to a particular shelf in the library, and more — all while keeping them within their comfort zone (in the form of their smartphone).
As a “side project,” he’s developing a secluded 10-acre plot of mesa in New Mexico with friends. Per Mr. Cook:
“It’s raw, it’s rugged, it’s beautiful. Last time, there were rattlesnakes and we got flash-flooded on the property. It’s pretty dangerous, but it’s awesome. Gorgeous. If I may pontificate: I do this stuff that sounds cool, it’s fun to talk about. But it’s all constrained to the screen and the keyboard and the mouse. Your home life, especially in the winter, you’re surrounded by four walls, and you drive to work, theoretically, in a car, you know, a box — it’s like, there is no point in your day when your horizon is bigger than your fucking computer screen.”
Inverse spoke with Cook about the stuff that sounds cool and that’s fun to talk about: the future of education and how virtual reality will forever change the way we learn.
Does anyone else in the world share your job title — emerging technologies librarian?
Yeah, actually. For OU libraries, it’s a relatively new title, but it’s become a thing as technology has grown more and more important to academic study and scholarship in the library. Basically, more and more universities are employing emerging technologies librarians.
One of your projects aims to incorporate virtual reality into education. Could you explain that project?
It’s a cutting-edge virtual reality system that we’re calling the O.V.A.L. (the Oklahoma Virtual Academic Laboratory), based on the Oculus Rift hardware. We have an open-access database where you can drag and drop your 3D model into virtual reality for networked analysis. So, you can share the experimentation, or the fly-through of your 3D model, with anybody — as long as they have the headset and the application.
What are some examples of how that’s being put to use?
What we’ve seen is people in chemistry classes flying through hemoglobin molecules. People from architecture classes doing walkthroughs of their unbuilt buildings, which would obviously be prohibitively expensive for an undergraduate to actually build and walk through. I’m working with a guy who has extremely high-definition 3D scans of these gospel manuscripts from the Old World, from Britain. They were written on vellum in the year 700. He has them scanned in such a way that you can walk on the surface of the page as if it were a landscape, because the vellum warped over time from moisture.
You can have a cancer researcher — some of whom we’re already working with — who has CT or X-ray data of tumor scans that they can upload. Then they can guide a tour from their O.V.A.L. workstation on the South Campus for a classroom of headset wearers on the undergraduate campus.
Why might this be particularly useful as an educational tool?
Your professor and your student can be anywhere. We could ship a headset for the cost of a textbook to someone in, say, Oregon, and they, as part of a massive online class, could share in the VR session with their professor in Oklahoma. It eliminates the necessity of physical proximity to your professor. Although, of course, there are some things that you can’t recreate in digital format.
If you have a 3D model, and you’re non-technical, you can walk in off the street or upload the model from anywhere, and join your co-researchers or classmates in a flythrough of that data across the network. You’ll be in the same space, regardless of where you’re physically located, and able to manipulate things like scale and rotation instantaneously across the network. Any change you make, your partner will see. Whatever you’re looking at, or pointing at with your laser pointer, your partner will see. I could go on for days, but you kind of get the idea.
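The shared-session behavior Cook describes — any change one participant makes to the model is seen by everyone else — can be sketched as a simple publish-to-all state model. This is an illustration only: names like `SharedModel` and `Participant` are hypothetical, not part of the actual O.V.A.L. software, and a real system would replicate this state over a network rather than in a single process.

```python
# Minimal sketch of a synced VR session: one shared model state,
# broadcast to every participant whenever anything changes.

class Participant:
    def __init__(self, name):
        self.name = name
        self.view = {}            # the last state this participant has seen

    def receive(self, state):
        self.view = dict(state)   # copy the broadcast state

class SharedModel:
    def __init__(self):
        # Scale, rotation, and laser-pointer target are the kinds of
        # properties the interview mentions being synced instantaneously.
        self.state = {"scale": 1.0, "rotation": 0.0, "pointer": None}
        self.participants = []

    def join(self, participant):
        self.participants.append(participant)
        participant.receive(self.state)

    def update(self, **changes):
        # A change made by any one user is applied once,
        # then pushed to every connected participant.
        self.state.update(changes)
        for p in self.participants:
            p.receive(self.state)

session = SharedModel()
professor = Participant("professor in Oklahoma")
student = Participant("student in Oregon")
session.join(professor)
session.join(student)

# The guide rescales the model and points at a feature; both views update.
session.update(scale=2.5, pointer="hemoglobin molecule")
```

The point of the design is that participants never edit their own copies; they only ever render the broadcast state, so everyone stays in the same virtual space regardless of physical location.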
How else will the incorporation benefit education?
In terms of — and this is the philosopher in me coming out — the benefits are extremely clear in terms of the embodied human being. You can manipulate a 3D model right now on your cell phone if you go to the right site, but what you can’t do is experience it in the same way you experience an object in the world. Unless you have a system like this. So, you can essentially put the virtual object in front of your face, and you can turn your head or crane your neck or reach out with your hand and manipulate it in the same way you manipulate a physical object. That makes for a much more intuitive and efficient analysis.
The cool thing is, that’s completely distributed over the network. Any change that the guide or lead researcher makes to the object is seen in real time by anyone wearing a headset, so everyone can share in the analysis. Not only that, but a lot of these things are things you would never see except under glass. If you’re talking about an ancient manuscript on vellum, there’s no way you could hold that in your hand, or analyze it at such extreme magnification — unless you’re in this space.
It’s the best of both worlds. Not only can you analyze it in a way that’s natural and doesn’t require special tools or training, but you can also get up close and personal in a way that’s typically not allowed. For example, we could preserve historical artifacts that are being destroyed in the Middle East right now. I have thought about doing a database or archive of endangered or near-extinct species, such that you could recreate their morphology in virtual reality for in-depth, minute fly-throughs for future generations. You could have a database of extinct animals that your entire third-grade class could fly through and see as they were, not as they’re described in the textbook.
Do you imagine, in the future, a classroom with a bunch of kids with VR headsets?
Yeah, absolutely. We’re on the verge of scaling up already. We’re starting with the two-chair system, then we’re going to go to four-chair, then we could go to classroom size. There’s no limit, technically, to how big we could go. The limit is cost — headsets and fast computers, basically — and that is actually much lower than what the previous systems were. This is like a few thousand dollars — a very few thousand — and you can walk in off the street without any technical expertise and analyze your model across the network in real time. It’s a complete step forward in cost and accessibility. We’re at the forefront, which is exciting. Oklahoma libraries — you wouldn’t think that would be the place to go and experiment with virtual reality, but it is.
Do you have solid access to funds?
None of the projects we’ve designed or developed have actually required an extravagant amount of money, for the reasons I just described with regard to virtual reality. First, the software is coming down in difficulty, which is to say you don’t necessarily need to be a computer scientist to design software. Second, the hardware is coming down in price. So we’re essentially leveraging both of those factors. It’s a perfect convergence that allows us to do this for a relatively low cost, especially compared to previous technologies. Everything’s already in place; it’s just a matter of putting the pieces together.
You’re talking a lot about virtual classrooms, virtual educational experiences. Are you confident that there will be a need for libraries and classrooms in the future?
Yes, definitely. The concept of a live discussion with a human being, and everything that represents, will remain valuable, especially for certain subjects.
I don’t want to get myself in trouble too much by talking about the potential imminent demise of the physical campus. But I will say that the library is reinventing itself and keeping itself relevant in a good way. We’ve gone from filling the need for curation, archiving, and preservation of physical texts to doing the same thing for digital objects.
My most recent phrasing: the lifecycle of a 3D object. You have a low-cost scanner, a structure sensor, that can capture 3D data in the real world. You can attach it to your iPhone, walk around an object, and create a 3D model. That model can be sent, automatically, to the O.V.A.L. system for analysis, and from there to a 3D printer for output. In the middle, the library has to maintain this database of, for example, Native American artifacts, or chemical molecules that are pre-publication. The way I think of it, in the future it’s not that we’ve lost our jobs; we’re just curating and preserving digital artifacts rather than physical artifacts.
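The "lifecycle of a 3D object" Cook outlines — scan, archive in the library’s database, analyze in VR, print — can be sketched as a simple pipeline. Every function and field name below is a hypothetical stand-in for illustration, not the actual O.V.A.L. or library tooling.

```python
# Sketch of the 3D-object lifecycle: capture -> archive -> VR analysis -> print.
# The library's role is the archival step in the middle.

def capture_scan(object_name):
    # Stand-in for a structure sensor on a phone producing mesh data.
    return {"name": object_name, "mesh": "<mesh data>", "stage": "scanned"}

def archive(model, database):
    # Curation and preservation: the model enters the library's database.
    database[model["name"]] = model
    model["stage"] = "archived"
    return model

def send_to_oval(model):
    # Staged for networked VR analysis.
    model["stage"] = "in VR analysis"
    return model

def send_to_printer(model):
    # Final output step of the lifecycle.
    model["stage"] = "printed"
    return model

library_db = {}
artifact = capture_scan("pottery shard")
artifact = archive(artifact, library_db)
artifact = send_to_oval(artifact)
artifact = send_to_printer(artifact)
```

The design choice worth noting is that the archive sits in the middle of the pipeline rather than at the end: the database retains the model whether or not it is ever printed, which is the curatorial role the interview describes.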
What was the focus of your master’s degree?
Philosophy of mind. Specifically, extended mind and spatial cognition, visual-spatial perception.
Has that informed what you’re doing now?
Yes, absolutely. In fact, the first major project I embarked on as I was finishing that degree was the Sparq Labyrinth. It was basically an outgrowth of my thesis insofar as it was making use of the body in a library where people were getting more and more sucked into their screens and their headphones. It was an attempt to reintroduce the body into the library for relaxation purposes. Definitely theoretical in origin, if you can imagine.
What other projects have you drawn inspiration from?
I’ve been listening to Willie Watson, the former guitarist of Old Crow Medicine Show. I’ve been listening to that, like, in the bath, with some good whiskey. Although it doesn’t do me any favors when I drop my smartphone into the tub. In any case, that has proven to be very inspirational. Specifically, his “Midnight Special” cover. It hits hard.
In terms of emerging technology, what we do is actually scan the private sector for developments there, because as you can imagine they move much quicker than public education. I’m always looking and seeing what’s happening in the VR world with regard to Oculus and Leap Motion.
I’ve been reading Seneca. Can I just say, “All I do is read, all the time”? Can I just say: “Books”?
Alright, well that’s good enough. That’s where I get all of my ideas.