
Hector Charo

Senior Python Engineer
0693hector@gmail.com ⧫ +1 210 888 0667 ⧫ 2100 Tx 171, Cleburne, TX 76031
https://www.linkedin.com/in/superhector/

SUMMARY
Results-driven Python Backend Engineer with over 10 years of experience in designing,
developing, and deploying scalable backend solutions and AI-driven models.

Expert in Python programming with a strong background in full-stack technologies,
cloud services, and machine learning.

Adept at translating complex business requirements into efficient, high-performance
technology solutions.

Proven track record in optimizing system performance, enhancing data processing
workflows, and driving significant improvements in AI technologies across various
industries.

EXPERIENCE

MobiDev, Atlanta, Georgia (Remote) — Senior Python Engineer
July 2022 - October 2024

• Spearheaded the development and implementation of AI-driven document
processing workflows, leveraging Python, machine learning, NLP libraries,
Django, and AWS services to enhance accuracy and efficiency by 40%. Link
• Designed and deployed a scalable backend on AWS using services such as S3,
Lambda, API Gateway, and DynamoDB. Integrated React for frontend interactions
with a secure Django backend, ensuring seamless data flow and robust security
protocols.
• Developed Python scripts to automate data capture, validation, and classification
tasks using spaCy and NLTK. Implemented models like BERT for named entity
recognition (NER) and Logistic Regression for text classification, improving
document processing accuracy and reducing manual workload by 40%.
• Deployed backend services using Amazon Elastic Kubernetes Service (EKS),
ensuring scalable and resilient application performance. This allowed for efficient
management of containerized applications, improving deployment flexibility and
system reliability.
• Implemented CI/CD pipelines using AWS CodePipeline and CodeDeploy, optimizing
the deployment process, and enhancing system security and scalability through
automated testing and continuous integration practices.
• Enhanced backend data management using PostgreSQL for efficient handling of
structured data. Integrated AWS CloudWatch for real-time monitoring and logging,
improving system reliability and performance.
• Utilized spaCy for entity recognition, employing BERT for verifying document
content and ensuring the accuracy of extracted information. Applied Logistic
Regression with NLTK for text classification to categorize documents, enhancing the
efficiency of the processing pipeline.
• Implemented GPT-2 for text generation and summarization to assist in document
completion tasks, significantly improving the speed and accuracy of processing
large volumes of documents.
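
The document-classification step described above can be sketched in a few lines. This is an illustrative toy example using scikit-learn's TF-IDF vectorizer and logistic regression (the resume mentions NLTK; scikit-learn is used here only to keep the sketch self-contained, and the corpus and labels are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; the real pipeline would train on labeled documents.
docs = ["invoice total due payment",
        "contract agreement parties terms",
        "invoice amount billed payable",
        "agreement signed contract clause"]
labels = ["invoice", "contract", "invoice", "contract"]

# TF-IDF features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["payment due on invoice"]))  # → ['invoice']
```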

• Led the backend development and data architecture for a large-scale project
involving the classification of 21 million FCC comments, optimizing system
performance and scalability through advanced Python and AWS solutions.
• Utilized Django with AWS Lambda, Amazon S3, Amazon DynamoDB, API Gateway,
and CloudWatch. This setup significantly enhanced internal data access efficiency,
reducing retrieval times by 40% while maintaining high availability and security.
• Leveraged AWS Glue, NumPy, Pandas, Dask, and Amazon EMR (using PySpark) to
handle big data processing, resulting in a 35% reduction in processing time and
ensuring seamless integration with front-end systems.
• Developed Python scripts that automated critical ETL tasks by converting Excel
spreadsheets into SQL tables within Amazon RDS, reducing manual data handling
by 70% and improving overall data integrity.
• Utilized AWS CodePipeline with Jenkins to automate and enhance backend service
deployment, achieving a 35% improvement in deployment efficiency and
minimizing downtime during updates.
• Deployed the comment-classification model using Spark NLP on Amazon EMR,
integrating it with the Django backend services. This improved the classification
accuracy of FCC comments by 30%, enabling more accurate and efficient
processing of large-scale textual data.
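
The spreadsheet-to-SQL ETL step described above can be sketched as follows. This is a minimal illustration using an in-memory SQLite database as a stand-in for Amazon RDS; the table name, column names, and data are invented for the example:

```python
import sqlite3
import pandas as pd

# Toy spreadsheet data; in the pipeline described above this would come from
# Excel via pd.read_excel and be loaded into Amazon RDS rather than SQLite.
df = pd.DataFrame({"comment_id": [1, 2, 3],
                   "category": ["support", "oppose", "support"]})

conn = sqlite3.connect(":memory:")            # stand-in for the RDS instance
df.to_sql("fcc_comments", conn, index=False)  # spreadsheet rows -> SQL table

rows = conn.execute(
    "SELECT category, COUNT(*) FROM fcc_comments "
    "GROUP BY category ORDER BY category"
).fetchall()
print(rows)  # → [('oppose', 1), ('support', 2)]
conn.close()
```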

Pizza Hut, Plano, Texas (Hybrid) — Python Engineer
May 2020 - June 2022

• Led the backend architecture and data processing development for Pizza Hut's
online ordering platform, enhancing operational efficiency and user experience
through advanced Python, machine learning, and cloud-based technologies. Link
• Developed scalable server-side operations using Python with Flask, improving API
response times by 30% and ensuring efficient handling of high traffic loads.
• Implemented a Matrix Factorization model (Collaborative Filtering) using the
Surprise library for personalized recommendations, which increased user
engagement by 25%. Additionally, deployed an ARIMA (AutoRegressive Integrated
Moving Average) model for demand forecasting, optimizing inventory management
by analyzing time series data and reducing waste.
• Utilized PostgreSQL with SQLAlchemy to ensure high data integrity and efficient
transaction processing, streamlining operations across the platform.
• Leveraged EC2 for hosting the backend services, S3 for static content storage, and
RDS for managing the relational database, ensuring 99.9% uptime and reducing
hosting costs by 15% through optimized resource allocation.
• Implemented CI/CD pipelines using Jenkins and Docker, reducing deployment time
by 40% and minimizing the risk of production errors in a live environment.
• Integrated OAuth 2.0 using Auth0 for secure user authentication and implemented
AES-256 encryption protocols to protect sensitive data, significantly enhancing
overall platform security.
• Used Nginx as a reverse proxy and load balancer to distribute traffic efficiently,
improving server reliability and performance during peak usage times.
• Implemented a Prophet model (additive regression) and an ARIMA model for time
series forecasting to predict peak order times from historical data, helping
optimize staffing and inventory and resulting in a 20% reduction in operational
costs.
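
The collaborative-filtering approach described above can be illustrated with a minimal matrix-factorization sketch. This uses plain NumPy rather than the Surprise library so it is self-contained; the rating matrix, factor count, and hyperparameters are toy values, not the production setup:

```python
import numpy as np

# Toy user-item rating matrix; 0 marks an unrated item.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

rng = np.random.default_rng(0)
k, lr, reg = 2, 0.005, 0.02  # latent factors, step size, L2 penalty
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factors

# Stochastic gradient descent over the observed ratings: R ≈ P @ Q.T
for _ in range(3000):
    for u, i in zip(*R.nonzero()):
        p, q = P[u].copy(), Q[i].copy()
        err = R[u, i] - p @ q
        P[u] += lr * (err * q - reg * p)
        Q[i] += lr * (err * p - reg * q)

pred = P @ Q.T
print(round(float(pred[0, 2]), 2))  # predicted score for user 0's unrated item 2
```

The filled-in entries of `pred` are the recommendation scores; Surprise's `SVD` implements essentially this factorization with biases and tuned defaults.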

Develapp, Cleburne, Texas (Onsite) — Python Developer
October 2018 - March 2020

• Developed a full-stack Python web framework designed for enhanced flexibility
and extensibility, leading to more efficient development and deployment of
enterprise applications.
• Architected the framework using core components like WSGI, routing, templating,
forms, data plugins, Config, and events, emphasizing simplicity and modularity.
• Integrated technologies such as CouchDB, OpenID, App Engine, and jQuery,
creating a versatile and adaptable solution for various enterprise needs.
• Streamlined development by incorporating best practices, reducing new feature
development time by 30% by avoiding reinvention and optimizing processes.
• Engineered consumer-focused features using Python and modern web
technologies, driving a 25% increase in user engagement for consumer-facing
applications.
• Developed and maintained applications with Python frameworks like Django,
Pyramid, and Web2py, integrated with HTML, CSS, and JavaScript.
• Designed and linked a MySQL database with a proprietary Scala-based NLP pipeline,
improving data processing efficiency by 40%.
• Enhanced text analytics and processing capabilities using Apache Spark with Scala,
leading to faster and more accurate data-driven decision-making.
• Designed and optimized backend data processing pipelines and AI frameworks,
specializing in Python, distributed systems, and Azure cloud services to enhance
the efficiency and scalability of machine learning workflows.
• Developed real-time data pipelines using Azure Event Hubs and Azure Stream
Analytics, integrating with Azure Cosmos DB for persistent storage, reducing data
processing latency by 35%.
• Implemented backend services in Python, focusing on Flask, and deployed them on
Azure App Services to build robust and scalable RESTful APIs for machine learning
models.
• Designed and deployed AI models using PyTorch and TensorFlow on Azure Machine
Learning, improving model accuracy by 20% through optimized backend processing
and seamless integration with Azure cloud infrastructure.
• Automated backend testing with PyTest and Selenium WebDriver, leveraging Azure
DevOps for continuous integration and deployment (CI/CD), leading to a 50%
reduction in manual testing time.
• Managed data processing with Pandas and Dask on Azure Databricks, ensuring
efficient handling of large datasets and adhering to PEP8 standards for code
maintainability.
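
The WSGI-and-routing core mentioned above can be illustrated with a minimal, standard-library-only sketch. The decorator and names are hypothetical, not the actual framework's API:

```python
from wsgiref.util import setup_testing_defaults

ROUTES = {}

def route(path):
    """Register a handler for a URL path (hypothetical decorator)."""
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register

@route("/hello")
def hello(environ):
    return "Hello, world!"

def app(environ, start_response):
    """A tiny WSGI application that dispatches on PATH_INFO."""
    handler = ROUTES.get(environ.get("PATH_INFO", ""))
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [handler(environ).encode()]

# Exercise the app without a server, via a synthetic WSGI environ.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/hello"
statuses = []
result = app(environ, lambda status, headers: statuses.append(status))
print(statuses[0], result[0].decode())  # → 200 OK Hello, world!
```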

BudLink, Cleburne, Texas (Onsite) — Software Developer
July 2015 - August 2018

• Optimized backend systems and machine learning workflows, leveraging Python,
distributed computing, and database technologies to enhance data processing
efficiency and operational performance.
• Developed backend Python and Bash scripts for data conversion, AMQP/RabbitMQ
messaging, RESTful API integration, and CRUD operations, streamlining system
processes.
• Architected a distributed worker framework using Celery, RabbitMQ, MySQL, and
Django, improving task distribution and reliability in service-oriented architectures.
• Led data migration from SQLite3 to Apache Cassandra, designing data models and
implementing monitoring using DSE and DataStax OpsCenter.
• Built and maintained ETL pipelines for data integration into Hadoop ecosystems
using Apache Spark, YARN, and Apache Hive, optimizing backend data processing.
• Deployed backend services to Heroku using Git for version control, ensuring smooth
and scalable deployments.
• Enhanced codebase performance with design patterns like Front Controller and
DAO, and utilized Matplotlib for backend-driven data visualization.

Hill College, Hillsboro, Texas — Software Developer Intern
May 2014 - June 2015

• Assisted in the deployment and management of backend services on AWS, including setting
up EC2 instances, managing S3 storage, and configuring RDS databases to support internal
applications.
• Participated in troubleshooting backend issues and optimizing cloud resources to improve
system performance and reliability.
• Contributed to the implementation of security best practices by automating backups and
monitoring access controls using Python and AWS services.

EDUCATION

Hill College, Hillsboro, Texas — Bachelor’s degree, Computer Science
April 2011 - March 2015

• Studied algorithms and data structures.

• ICPC challenger and regional contest winner (used C/C++ as the programming
language).

HARD SKILLS
• Programming Languages: Python, JavaScript, SQL

• Frameworks and Libraries: Django, Flask, React, Apache Spark, PySpark,
ELK (Elasticsearch, Logstash, Kibana), Selenium, Matplotlib, NumPy, Pandas, SciPy,
TensorFlow, PyTorch

• Databases: PostgreSQL, MongoDB, Cassandra, MySQL, CouchDB, DynamoDB, SQLite

• AI, ML, and Data Science: Machine Learning (ML), Deep Learning, Natural
Language Processing (NLP), Topic Modeling, Text Analytics, Sentiment Analysis,
Linear Regression, Logistic Regression, Convolutional Neural Networks (CNNs),
Data Pipelines, ETL (Extract, Transform, Load)

• Cloud and DevOps: AWS (S3, EC2, Lambda, Glue, CloudWatch, SQS, DynamoDB, EKS,
EMR), Microsoft Azure (Functions, Cosmos DB, API Management, Data Factory,
Databricks, SQL Database, DevOps, AKS), Jenkins, Docker, Kubernetes

• Other Technologies and Methodologies: Computer Science, Serverless
Architecture, RESTful API, Hadoop Distributed Computing, Algorithm Development,
CI/CD, Agile Methodology, Linux

SOFT SKILLS
• Stress Resistance

• Team Building

• Customer Service

• Time Management
