Imagine you are wearing a camera — maybe it’s in a brooch in the shape of a cat or just stuck in your shirt pocket à la Her — and that all day long it’s snapping photos.
You pick up coffee at your local cafe, click. You buy a slice of pizza after work, click. You take the subway home, click. Thousands of clicks resulting in thousands of pictures that will later be processed through a prediction model, which is really a way for your computer, or your phone, or whatever the device of the future is, to get to know you.
The next time you leave work at 6:30 p.m., your device will know you’re going home and understand you take the subway, but it will let you know that your train is shit right now and you’re better off taking the bus.
This will be the future, maybe, according to new research from a coalition of roboticists and engineers from the Georgia Institute of Technology. They developed a new process to teach computers to understand and predict our daily routines, which, for now, involves a wearable camera that snaps a photo every 30 to 60 seconds. At the end of the day, the camera-wearer goes through the photos and labels each image based on the activity that was happening — eating, reading, whatever the subject didn’t delete later for privacy reasons. The images are then put into a generic prediction model, essentially training a computer to understand and categorize the activities.
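The label-then-train loop the researchers describe can be sketched in miniature. The sketch below is hypothetical and stands in for their actual deep-learning model: each photo is reduced to a toy feature vector, the wearer's labels are averaged into per-activity centroids, and a new photo is assigned the label of the nearest centroid.

```python
# Minimal sketch of the capture -> label -> train -> predict pipeline.
# Feature vectors and activity names here are illustrative stand-ins,
# not the researchers' actual model or data.
import math
from collections import defaultdict

def train_centroids(labeled_features):
    """Average the feature vectors for each activity label."""
    sums = {}
    counts = defaultdict(int)
    for features, label in labeled_features:
        if label not in sums:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new image."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# A toy "day" of photos the wearer has already labeled.
day = [
    ([0.9, 0.1], "eating"),
    ([0.8, 0.2], "eating"),
    ([0.1, 0.9], "commuting"),
    ([0.2, 0.8], "commuting"),
]
model = train_centroids(day)
print(predict(model, [0.85, 0.15]))  # a new lunch-hour photo -> "eating"
```

A real system would replace the hand-made vectors with features from a convolutional network, but the structure — wearer-supplied labels feeding a classifier that then predicts unlabeled images — is the same.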
“From there we would ideally be able to predict what activities you are conducting to a certain degree of accuracy — experimentally we saw this was around 83 percent for two participants,” Daniel Castro, a Ph.D. student working on the project, tells Inverse.
The goal of this work is to build a system that can automatically monitor and infer human behaviors and be applied to a variety of context-aware situations — personal assistance, healthcare, energy management. Healthcare, the researchers believe, could be an especially major beneficiary of this technology.
“Part of the inspiration [for the project] was that tracking what we eat can have an impact on us improving our health. If we are able to log and visualize everything we eat, we can look at ways we can improve our habits in order to improve our health,” says Castro, who co-authored the study, titled “Predicting Daily Activities From Egocentric Images Using Deep Learning.”
“A futuristic idea would be that your doctor is able to get some type of report with the amount of exercise you have been doing and your food intake, in order to give them a better idea of physiological background.”
For now, it takes six months of images and image labeling for the computer to understand someone’s habits and behavior. The eventual goal is for the system to work for people on a much larger scale, with a training period of just one day.
As for the wearable cameras, Castro says that he and his co-researchers are confident that the devices we wear in the future will have sensors, be it cameras or otherwise, that will be able to understand the activities we are performing as we do them. These could do anything from reminding us to take a daily medication to offering an alternative route to work. For now, their research subjects wear a smartphone in portrait mode in a contraption similar to the passport holders travelers wear around their necks.
The research that Castro and his peers are working on is promising, but seemingly far from ready to hit the market. Still, it’s a worthwhile investment: the market for wearable technology is estimated to reach $70 billion by 2025. According to the market research company IDTechEx, advanced informatics in wearable devices will match the healthcare market in terms of massive financial returns by the end of the decade. A wearable device that can both track and predict your health could be a hot ticket.
“Activity tracking devices like Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities — not just physical activities like walking and running,” Edison Thomaz, whose research inspired the project, said in a press release. “This work is moving toward full activity intelligence.”