Building a Sustainable Living Tips Generator with Next.js, Flask, and Google Gemini AI - Part 1: Backend Development

Introduction

In this first part of our tutorial series, we'll focus on building a robust Flask backend that integrates with Google's Gemini AI to generate personalized sustainability tips. We'll cover setting up the development environment, implementing the API endpoints, and handling AI interactions.

Project Overview

SustainAI Tips is a modern web application that helps users adopt sustainable living practices through personalized recommendations. The backend serves as the bridge between our frontend interface and the Gemini AI model, handling:

  • User input processing and validation
  • AI model interactions and prompt engineering
  • Response formatting and error handling
  • Cross-Origin Resource Sharing (CORS) configuration
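Concretely, the frontend sends a single JSON request and receives a list of markdown-formatted tips back. The shapes below (written as plain Python dictionaries for illustration; the tip text is just a sample) match the endpoint we build in Step 2:

# POST /api/tips - request body
request_body = {
    "location": "San Francisco",
    "habits": "driving car, using plastic bags"
}

# Successful response (HTTP 200): markdown-formatted tip lines
success_response = {
    "tips": ["## 1. Quick Wins", "* **Switch to reusable bags:** ..."]
}

# Validation failure (HTTP 400)
error_response = {"error": "Location is required"}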

Prerequisites

Before we begin, ensure you have:

  • Python 3.8 or higher installed
  • A Google Gemini AI API key
  • Basic knowledge of Python and REST APIs
  • A code editor (VS Code recommended)

Project Structure

backend/
├── app.py            # Main application file
├── requirements.txt  # Project dependencies
├── .env              # Environment variables
└── README.md         # Backend documentation

Step 1: Setting Up the Development Environment

  1. First, create the project directory and set up a virtual environment:
mkdir -p sustainai-web-tips/backend
cd sustainai-web-tips/backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
  2. Create and populate requirements.txt:
# Flask - Web framework for building the API
flask==2.0.1
# Flask-CORS - Handle Cross-Origin Resource Sharing
flask-cors==3.0.10
# Google Generative AI - Interface with Gemini AI
google-generativeai==0.3.0
# Python-dotenv - Load environment variables
python-dotenv==0.19.0
  3. Install the dependencies:
pip install -r requirements.txt 
  4. Create a .env file:
# Google Gemini AI API key
# Get your key from: https://makersuite.google.com/app/apikey
GOOGLE_API_KEY=your_gemini_api_key_here
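Optionally, before writing the full server, you can verify that the key loads and the Gemini client accepts it. The script below is only a sketch for that check (check_env.py is a hypothetical helper, not part of the final app); it assumes you run it from the backend directory where .env lives:

# check_env.py - optional sanity check, not part of the final application
import os

import google.generativeai as genai
from dotenv import load_dotenv

# Read GOOGLE_API_KEY from the .env file in the current directory
load_dotenv()

api_key = os.getenv("GOOGLE_API_KEY")
if not api_key:
    raise SystemExit("GOOGLE_API_KEY is missing - check your .env file")

genai.configure(api_key=api_key)

# A valid key should list at least one model that supports generateContent
for model in genai.list_models():
    if "generateContent" in model.supported_generation_methods:
        print(model.name)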

Step 2: Implementing the Backend Server

Create app.py with the following code:

""" SustainAI Tips Backend Server This Flask application serves as the backend for the SustainAI Tips web application. It uses the Google Gemini 1.5 Pro AI model to generate personalized sustainability tips based on user location and habits. Key Components: - Flask server with CORS support - Google Gemini AI integration - Structured prompt engineering - Error handling and validation Author: Antony Ngemu Date: March 2025 """ import os from flask import Flask, request, jsonify from flask_cors import CORS import google.generativeai as genai from dotenv import load_dotenv from typing import List, Dict, Union, Optional # Load environment variables from .env file load_dotenv() # Initialize Flask application app = Flask(__name__) # Configure CORS for frontend integration # This allows our Next.js frontend to make requests to this backend CORS(app, resources={ r"/api/*": { "origins": ["http://localhost:3000", "http://127.0.0.1:3000"], "methods": ["POST", "OPTIONS"], "allow_headers": ["Content-Type"] } }) def initialize_ai_model() -> None: """ Initialize the Google Gemini AI model with API key. Raises: ValueError: If GOOGLE_API_KEY is not found in environment variables Exception: If model initialization fails """ try: api_key = os.getenv('GOOGLE_API_KEY') if not api_key: raise ValueError("GOOGLE_API_KEY not found in environment variables") genai.configure(api_key=api_key) except Exception as e: print(f"Error initializing AI model: {str(e)}") raise def construct_ai_prompt(location: str, habits: str) -> str: """ Construct a detailed prompt for the AI model. Args: location (str): User's location for location-specific tips habits (str): User's current habits for targeted recommendations Returns: str: Formatted prompt string for the AI model """ return f""" Generate practical and personalized sustainability tips for someone in {location} with the following habits: {habits}. Format the response in markdown with the following categories: 1. Quick Wins (Easy to implement immediately) 2. Sustainable Living 3. Transportation & Mobility 4. Community & Social Impact 5. Environmental Protection For each tip: - Make it specific to {location} - Include cost implications or savings - Make it actionable and practical - Consider local resources and infrastructure - Use bullet points with bold headers Start each category with a markdown header (##) and number. Format each tip as a markdown bullet point (*). Use bold (**) for tip headers. """ def generate_ai_tips(location: str, habits: str) -> List[str]: """ Generate personalized sustainability tips using Google Gemini AI. Args: location (str): User's location for location-specific tips habits (str): User's current habits for targeted recommendations Returns: List[str]: List of markdown-formatted tips organized by categories Raises: Exception: If there's an error generating tips from the AI model """ try: # Initialize Gemini 1.5 Pro model  model = genai.GenerativeModel('gemini-1.5-pro') # Generate AI response using the constructed prompt  prompt = construct_ai_prompt(location, habits) response = model.generate_content(prompt) if not response or not response.text: raise ValueError("Empty response from AI model") # Process and clean the response  tips = response.text.strip().split('\n') return [tip.strip() for tip in tips if tip.strip()] except Exception as e: print(f"Error generating tips: {str(e)}") raise def validate_input(data: Dict[str, str]) -> Optional[str]: """ Validate the input data from the request. 
Args: data (Dict[str, str]): Request data containing location and habits Returns: Optional[str]: Error message if validation fails, None if successful """ if not data: return "Request body is empty" location = data.get('location', '').strip() habits = data.get('habits', '').strip() if not location: return "Location is required" if not habits: return "Habits are required" return None @app.route('/api/tips', methods=['POST']) def get_tips() -> Union[Dict, tuple]: """ API endpoint to generate sustainability tips. Expects POST request with JSON body containing: - location: string - habits: string Returns: Union[Dict, tuple]: JSON response with generated tips or error message """ try: # Extract and validate input data  data = request.get_json() validation_error = validate_input(data) if validation_error: return jsonify({ 'error': validation_error }), 400 # Generate tips using AI  tips = generate_ai_tips( location=data['location'].strip(), habits=data['habits'].strip() ) return jsonify({'tips': tips}) except Exception as e: # Log error and return safe error message  print(f"Error processing request: {str(e)}") return jsonify({ 'error': 'Failed to generate tips. Please try again.' }), 500 if __name__ == '__main__': try: # Initialize AI model before starting the server  initialize_ai_model() # Run Flask development server  app.run(debug=True) except Exception as e: print(f"Failed to start server: {str(e)}") 

Step 3: Understanding the Implementation

Let's break down the key components of our backend:

  1. Environment Setup

    • We use python-dotenv to manage environment variables
    • CORS is configured to allow requests from our Next.js frontend
    • Type hints are used throughout for better code maintainability
  2. AI Model Integration

    • The initialize_ai_model() function sets up Gemini AI
    • Error handling ensures graceful failure if the API key is missing
    • Model initialization happens before server start
  3. Prompt Engineering

    • The construct_ai_prompt() function creates structured prompts
    • Categories are clearly defined for consistent output
    • Markdown formatting is specified for frontend rendering
  4. Input Validation

    • The validate_input() function checks request data
    • Empty or missing fields are caught early
    • Validation errors return clear messages
  5. Error Handling

    • Try-except blocks catch and log errors
    • User-facing error messages are sanitized
    • Detailed logs help with debugging
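
To exercise the validation and error paths without calling the AI model, one option is Flask's built-in test client. The snippet below is a minimal sketch (a hypothetical test_validation.py) that assumes it runs in the same directory as app.py; note that importing app does not call initialize_ai_model(), so no API key is needed for these checks:

# test_validation.py - minimal sketch using Flask's test client
from app import app

client = app.test_client()

# Missing habits -> validate_input() rejects the request with HTTP 400
resp = client.post("/api/tips", json={"location": "Nairobi", "habits": ""})
print(resp.status_code)   # 400
print(resp.get_json())    # {'error': 'Habits are required'}

# Empty JSON body -> "Request body is empty"
resp = client.post("/api/tips", json={})
print(resp.status_code)   # 400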

Step 4: Testing the Backend

  1. Start the server:
python app.py 
  2. Test the API endpoint using curl:
curl -X POST http://localhost:5000/api/tips \
  -H "Content-Type: application/json" \
  -d '{"location":"San Francisco","habits":"driving car, using plastic bags"}'
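
Alternatively, if you'd rather test from Python than curl, an equivalent request with the requests library (installed separately via pip install requests) looks roughly like this:

# Same request as the curl example, using the requests library
import requests

resp = requests.post(
    "http://localhost:5000/api/tips",
    json={"location": "San Francisco", "habits": "driving car, using plastic bags"},
    timeout=60,  # AI generation can take several seconds
)

print(resp.status_code)
for tip in resp.json().get("tips", []):
    print(tip)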

Next Steps

In Part 2 of this tutorial, we'll:

  • Build the Next.js frontend
  • Create a responsive UI with React Bootstrap
  • Implement the tips display with animations
  • Add error handling and loading states

The complete source code is available on GitHub [].
