Here is how you can build a simple food recognition and nutrition estimation app using OpenAI in just 20 minutes!
How It Works
Image Encoding: The image is converted to base64 so it can be sent to OpenAI’s API.
Food Recognition Prompt: The app sends the image to OpenAI to identify food items and their respective quantities.
Nutritional Estimation: Another prompt is used to estimate the nutritional values based on the identified food items and their quantities.
Displaying the Results: The estimated calorie, protein, fat, and carbohydrate values are shown using Gradio.
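The encoding step above is worth seeing in isolation. Here is a minimal sketch of reading an image and producing the `data:` URL the chat API expects (the `to_data_url` helper name is my own, not from the original code):

```python
import base64


def encode_image(image_path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def to_data_url(b64: str, mime: str = "image/jpeg") -> str:
    """Wrap a base64 string in the data-URL form used for image_url content parts."""
    return f"data:{mime};base64,{b64}"
```

Any local image path works; the resulting string goes into the `image_url` part of the message payload.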
The code is very simple and could be improved and better organized, but the idea is to show how easily a simple proof of concept can be created.
If you are working on interesting projects, connect with me on
https://www.linkedin.com/in/mayankladdha31/
```python
from openai import OpenAI
from pydantic import BaseModel
import base64
from typing import List
import gradio as gr


def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


openai_api_key = "key"  # replace with your OpenAI API key
client = OpenAI(api_key=openai_api_key)


# Pydantic models to record food items and nutrient information. Not strictly
# necessary, but helpful if you intend to build APIs or reuse the data elsewhere.
class Food(BaseModel):
    name: str
    quantity: str


class Items(BaseModel):
    items: List[Food]


class Nutrient(BaseModel):
    steps: List[str]
    reasons: str
    kcal: str
    fat: str
    proteins: str
    carbohydrates: str


def recognize_items(image):
    """Take an image and return recognized food items with their quantities,
    followed by an estimate of their nutrition."""
    # Step 1: recognize items and quantities.
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "You are an expert in recognising individual food items and "
                        "their quantity. Give a count (number) for countable items and "
                        "an estimate for liquid/mixed or non-countable items. For "
                        "example, if an image contains one burger, two pastries, 2 pav, "
                        "bhaji and dal, you return burger, pastry, pav, bhaji and dal "
                        "along with the count or estimates, without any duplicates. For "
                        "non-countable items give an estimate in grams while explaining, "
                        "like 'looks like 1 teaspoon of sauce, so around 5-8 grams' or "
                        "'looks like 1 serving of bhaji, so around 150-200 g'. "
                        "Given the image below, recognise food items with their quantity."
                    ),
                }
            ],
        }
    ]
    base64_image = encode_image(image)
    messages[0]["content"].append(
        {
            "type": "image_url",
            "image_url": {
                "url": f"data:image/jpeg;base64,{base64_image}",
                "detail": "low",
            },
        }
    )
    response = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=messages,
        response_format=Items,
        max_tokens=300,
        temperature=0.1,
    )
    foods = response.choices[0].message.parsed
    res = ""
    for food in foods.items:
        res = res + food.name + " " + food.quantity + "\n"

    # Step 2: estimate nutrition. A separate model could be used for this task.
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "You are an expert in estimating nutrition information given "
                        "food items and their quantities. Think step by step, "
                        "considering the given food items and their quantities, and "
                        "give an estimated range (lowest - highest) of kcal, fat, "
                        "proteins and carbohydrates. Ignore contributions from minor "
                        "items. Ensure your estimations are based solely on the "
                        "provided quantities. Return steps, reasons and estimations "
                        "if this food was consumed.\n\n"
                        f"Food and quantity consumed by user: {res}\n\n"
                    ),
                }
            ],
        }
    ]
    messages[0]["content"].append(
        {
            "type": "image_url",
            "image_url": {
                "url": f"data:image/jpeg;base64,{base64_image}",
                "detail": "low",
            },
        }
    )
    response = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=messages,
        response_format=Nutrient,
        max_tokens=500,
        temperature=0.1,
    )
    nuts = response.choices[0].message.parsed
    steps = " ".join(nuts.steps)
    res = (
        res
        + "\n" + steps
        + "\n\ncalories: " + nuts.kcal
        + " \nfats: " + nuts.fat
        + " \nproteins: " + nuts.proteins
        + " \ncarbohydrates: " + nuts.carbohydrates
        + "\n" + nuts.reasons
        + "\n*These are estimations based on the image. They might not be "
        "perfect or accurate. Please calculate based on the food you consume "
        "for a more precise estimate."
    )
    return res


with gr.Blocks() as demo:
    with gr.Row():
        image_input = gr.Image(label="Upload Image", height=300, width=300, type="filepath")
    with gr.Row():
        submit_btn = gr.Button("Detect food and quantity")
    with gr.Row():
        text_response_1 = gr.Textbox(label="Detected food and quantity", scale=1)
    submit_btn.click(recognize_items, inputs=[image_input], outputs=[text_response_1])

if __name__ == "__main__":
    demo.launch()
```
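Both prompts attach the image with an identical payload, so that repetition could be factored into a small helper. Here is a sketch of what that might look like (the `build_messages` name and signature are my own, not part of the original code):

```python
def build_messages(prompt: str, b64_image: str) -> list:
    """Assemble a single user message containing a text part plus an image
    part, in the shape the Chat Completions API expects for vision input."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{b64_image}",
                        "detail": "low",  # low detail keeps token usage down
                    },
                },
            ],
        }
    ]
```

Each of the two API calls could then pass `build_messages(prompt_text, base64_image)` instead of constructing and appending the dictionaries inline.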