DietVision Cover Photo-4.png
 

OVERVIEW

PROCESS

Competitive Analysis

Observations

Interviews

Affinity Mapping

UX/UI Design

DURATION

7 months (2019-2020)

TOOLS

Figma

Miro

DietVision is a mobile app that uses eating behavior pattern-recognition algorithms to support dietitians' nutrition assessment process. It also provides a space for dietitians to better communicate with clients. 

Current food image recognition algorithms often focus on either classifying a single image into one of a pre-defined set of food categories (e.g., pizza, pasta, sushi) or estimating caloric information from a photo of a single dish. In practice, however, health experts such as dietitians look for eating patterns and behaviors across multi-day diaries. Furthermore, people often have a variety of healthy eating goals that are not supported by simple calorie counting. There is therefore a need to help individuals and health experts better understand the patterns and trends in individual eating behavior and decisions, beyond recognizing foods or counting the calories of single dishes.
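To make the gap concrete, the sketch below shows (in Python) the jump from per-photo labels, which current classifiers output, to diary-level patterns, which dietitians actually look for. Every name and structure here is a hypothetical illustration, not part of any existing system:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PhotoResult:
    taken_at: datetime       # capture time of the photo
    foods: list[str]         # labels from a single-image classifier
    food_groups: list[str]   # e.g., "vegetables", "grains", "protein"

def summarize_diary(photos: list[PhotoResult]) -> dict:
    """Aggregate single-photo labels into diary-level patterns."""
    group_counts = Counter(g for p in photos for g in p.food_groups)
    food_counts = Counter(f for p in photos for f in p.foods)
    return {
        # food-group balance across the whole diary, not per dish
        "group_balance": dict(group_counts),
        # repeated items hint at preferences dietitians can build on
        "repeated_foods": [f for f, n in food_counts.items() if n >= 3],
        # meal timing, recoverable from photo timestamps
        "eating_times": sorted(p.taken_at.time() for p in photos),
    }
```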

CHALLENGES

RESEARCH

Competitive Analysis

 

First, to better understand how existing food recognition technology is used in commercial applications, I collected health apps using keywords such as “food tracker” and “nutrition tracker” and analyzed their features. I found that:

  • Current food computer vision can recognize multiple foods in one picture and provide the corresponding calorie and macro/micronutrient information.

  • Users can manually input serving size/quantity and cooking method (raw, boiled, stir-fried, etc.) to adjust the nutrient estimates.

  • Most apps don't give further instructions or recommendations about the meals; Foodvisor, however, tells users whether a food is a good choice and explains its likely impacts and benefits (risk of weight gain, low in cholesterol, high fat content, etc.).


Expert Observations & Interviews

To understand how nutrition experts review food photos and make recommendations for patients, I recruited 10+ dietitians based in Bloomington and Indianapolis. The research included two parts:

  1. Observations: Participants were asked to review a 7-day food photo diary set using the think-aloud method and to fill in a photo review evaluation form. I observed their overall review process, what specific information and features in the photos they were looking for, and how, when, and what they annotated on the photos.

  2. Interviews: After the first session, I asked participants about their professional background and their photo review process.

Affinity Mapping

I then used affinity mapping to synthesize the research data and draw insights about what information is used in making nutrition recommendations. Each data point was color-coded: yellow (dietitians' observations of the food photo sets), blue (dietitians' recommendations for clients), green (questions dietitians wanted to ask clients), and purple (my observation notes).

The data clustered into 13 categories, which helped me develop the following insights:

  • Dietitians focused more on food group balance than on micronutrients or accurate calorie counting.

  • Food preparation method (e.g., takeout, packaged, homemade), which may indicate clients' awareness of healthy eating and their access to healthy food, was recognized from packaging, containers, food types, and appearance.

  • Eating time helped dietitians identify the meal type (breakfast, lunch, dinner, or snack) and the time gap between meals.

  • Repetitive food items allowed dietitians to infer a client's food preferences and, based on those, propose acceptable improvements.

  • Lifestyle, work schedule, hunger and fullness levels, emotions, motivations, and medical history could not be learned from the photos but were crucial for personalized recommendations.

DESIGN

 

Based on the findings above, I proposed a mobile app that assists dietitians in identifying dietary patterns by analyzing sets of food photos with computer vision and machine learning. The idea is not to replace dietitians in providing dietary recommendations, but to save the considerable time they spend reviewing individual food photos to evaluate clients' needs.

With the app, dietitians can gain machine learning insights in four simple steps: create a client profile, create a food diary from photos, analyze the photo set (via ML), and review the results and provide feedback.

Step 1: Create client profile

On the client list page, users can view, search, filter, and manage their client list. By clicking the “Add” button, they can create a client profile with basic info such as name, gender, and date of birth, as well as medical history and dietary information. Users can skip the latter two sections or add customized fields to them.

client.png
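A hypothetical data model for this step might look like the sketch below; the field names are illustrative, not taken from the actual design spec:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClientProfile:
    name: str
    gender: str
    date_of_birth: date
    medical_history: dict[str, str] = field(default_factory=dict)  # skippable
    dietary_info: dict[str, str] = field(default_factory=dict)     # skippable
    custom_fields: dict[str, str] = field(default_factory=dict)    # user-defined
```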

Step 2: Create food diary

On the client detail page, users will be prompted to create a new food diary. From the research, I learned that clients sometimes send food photos to dietitians via smartphone to review before the appointment. Assuming that dietitians would have clients' food diary photos on their devices, the app allows users to select and upload photos from their photo gallery. The date and time information of the photos is retrieved to auto-generate the diary title, but users can also edit the title manually and add a description to the diary.

food diary.png
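As a sketch of how the title auto-generation could work, the snippet below reads each photo's EXIF capture date with Pillow and builds a default title from the date range. The title format and error handling are my assumptions:

```python
from datetime import datetime
from PIL import Image, ExifTags

def photo_datetime(path: str) -> datetime | None:
    """Read the capture date from a photo's EXIF data, if present."""
    exif = Image.open(path).getexif()
    # 306 = DateTime; DateTimeOriginal (36867) lives in the Exif sub-IFD
    raw = exif.get(306) or exif.get_ifd(ExifTags.IFD.Exif).get(36867)
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

def diary_title(paths: list[str]) -> str:
    """Build a default title like 'Food Diary: Mar 02 - Mar 08'."""
    dates = sorted(d for p in paths if (d := photo_datetime(p)))
    if not dates:
        return "Food Diary"
    return f"Food Diary: {dates[0]:%b %d} - {dates[-1]:%b %d}"
```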

Step 3: Analyze photo sets

After users create a diary, the app starts analyzing each photo and identifying patterns in the diary. Because food image recognition can be unreliable under some circumstances, the app also asks users to confirm the recognized foods. Users can skip this step and go directly to the results page.

analyze diary.png
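One plausible way to implement this step is a confidence threshold: high-confidence recognitions are accepted automatically, while tricky ones are queued for the dietitian to confirm. The model interface and the 0.8 threshold below are assumptions for illustration:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, to be tuned with real model data

def analyze_diary(photos, recognize):
    """recognize(photo) -> list of (food_label, confidence) pairs."""
    confirmed, needs_review = [], []
    for photo in photos:
        for label, confidence in recognize(photo):
            if confidence >= CONFIDENCE_THRESHOLD:
                confirmed.append((photo, label))
            else:
                # Tricky cases go to the user; skipping review keeps the
                # model's best guess and proceeds to the results page.
                needs_review.append((photo, label, confidence))
    return confirmed, needs_review
```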

Step 4: Review results and provide feedback

Users can check the results categorized by themes such as food group balance, food preparation, eating time, and fluid intake, the aspects dietitians pay the most attention to when reviewing food photos. Users can also provide feedback on a specific finding to help the algorithms improve. In parallel, photos are categorized by features such as meal, food group, and processed food, and users can view and edit the recognition results of individual photos.

results.png
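The sketch below shows how findings might be grouped by these themes, with a simple feedback record that could be fed back to the algorithms. The theme names come from the research findings above; everything else is an assumption:

```python
from dataclasses import dataclass

THEMES = ["food group balance", "food preparation", "eating time", "fluid intake"]

@dataclass
class Finding:
    theme: str            # one of THEMES
    summary: str          # e.g., "Vegetables appear in only 4 of 21 meals"
    photo_ids: list[str]  # photos supporting this finding

@dataclass
class Feedback:
    finding: Finding
    correct: bool         # dietitian agrees or disagrees with the finding
    note: str = ""        # optional correction, used to improve the model

def group_by_theme(findings: list[Finding]) -> dict[str, list[Finding]]:
    grouped: dict[str, list[Finding]] = {t: [] for t in THEMES}
    for f in findings:
        grouped.setdefault(f.theme, []).append(f)
    return grouped
```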
 

EVALUATION

Due to time constraints, I didn't have the chance to evaluate the design concept with users (dietitians). Moving forward, developing the idea requires more conversations with computer vision engineers and dietitians. Steps toward a more robust design include:

1. Work with engineers to learn how precisely the system can recognize and categorize food photos.

2. Based on that feasibility, define the right strategy for presenting the analysis results.

3. Compare the computer's analysis with human analysis (by dietitians).

4. Conduct interviews to further understand dietitians' working context.

 

There's still a long way to go to leverage the potential of computer vision and machine learning. Though this project is just a small step in exploring the possibilities of human-centered machine learning, I'd love to contribute to more projects like this!