Conversation

@Hellebore

PR implementing an MCP server for Google Forms.
Closes #118

Greptile Summary

Implemented a fully featured MCP server for Google Forms with configuration, complete documentation, server logic, and tests.

  • Added /src/servers/gforms/config.yaml defining metadata and tool registration.
  • Introduced /src/servers/gforms/README.md with detailed setup, OAuth configuration, and usage instructions.
  • Developed /src/servers/gforms/main.py with asynchronous handlers for form management and Google Forms API integration (a hedged sketch of such a handler follows this list).
  • Added comprehensive tests in /tests/servers/gforms/tests.py ensuring tool functionality.
  • Updated /README.md to include the new server entry in the supported servers table.
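
For reviewers unfamiliar with the layout, the sketch below shows roughly what one asynchronous tool handler in main.py could look like, assuming the official `mcp` Python SDK and `googleapiclient`. The `create_form` tool name, its input schema, the `token.json` path, and the OAuth scope are illustrative assumptions, not the exact code in this PR.

```python
# Illustrative sketch only; the actual guMCP server wiring, tool names,
# and OAuth handling in this PR may differ.
import asyncio

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from mcp.server import Server
import mcp.types as types

server = Server("gforms")


@server.list_tools()
async def list_tools() -> list[types.Tool]:
    # Advertise the tools this server exposes to MCP clients.
    return [
        types.Tool(
            name="create_form",
            description="Create a new Google Form with the given title",
            inputSchema={
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "required": ["title"],
            },
        )
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    # Dispatch a tool call to the Google Forms REST API.
    if name != "create_form":
        raise ValueError(f"Unknown tool: {name}")
    creds = Credentials.from_authorized_user_file(
        "token.json", scopes=["https://www.googleapis.com/auth/forms.body"]
    )
    service = build("forms", "v1", credentials=creds)
    # googleapiclient is synchronous, so run the blocking request off the event loop.
    result = await asyncio.to_thread(
        service.forms().create(body={"info": {"title": arguments["title"]}}).execute
    )
    return [types.TextContent(type="text", text=f"Created form {result['formId']}")]
```

Wrapping the synchronous Google API client in asyncio.to_thread keeps the handler from blocking the server's event loop; however the real implementation loads credentials, it should follow the OAuth setup described in the new README.
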
@Hellebore (Author)

This is a benchmark review for the bakeoff experiment.
Run ID: bakeoff/benchmark_2025-05-02T11-38-04_v1-36-0-dirty.

This pull request was cloned from https://github.com/gumloop/guMCP/pull/119. (Note: the URL is not a link to avoid triggering a notification on the original pull request.)

Experiment configuration
review_config:
  # User configuration for the review
  # - benchmark - use the user config from the benchmark reviews
  # - <value> - use the value directly
  user_review_config:
    enable_ai_review: true
    enable_rule_comments: false
    enable_complexity_comments: benchmark
    enable_security_comments: benchmark
    enable_tests_comments: benchmark
    enable_comment_suggestions: benchmark
    enable_pull_request_summary: benchmark
    enable_review_guide: benchmark
    enable_approvals: false
    base_branches: [base-sha.*]
  ai_review_config:
    # The model responses to use for the experiment
    # - benchmark - use the model responses from the benchmark reviews
    # - llm - call the language model to generate responses
    model_responses:
      comments_model: benchmark
      comment_validation_model: benchmark
      comment_suggestion_model: benchmark
      complexity_model: benchmark
      security_model: benchmark
      tests_model: benchmark
      pull_request_summary_model: benchmark
      review_guide_model: benchmark
      overall_comments_model: benchmark

# The pull request dataset to run the experiment on
pull_request_dataset:
  # CodeRabbit
  - https://github.com/neerajkumar161/node-coveralls-integration/pull/5
  - https://github.com/gunner95/vertx-rest/pull/1
  - https://github.com/Altinn/altinn-access-management-frontend/pull/1427
  - https://github.com/theMr17/github-notifier/pull/14
  - https://github.com/bearycool11/AI_memory_Loops/pull/142
  # Greptile
  - https://github.com/gumloop/guMCP/pull/119
  - https://github.com/autoblocksai/python-sdk/pull/335
  - https://github.com/grepdemos/ImageSharp/pull/6
  - https://github.com/grepdemos/server/pull/61
  - https://github.com/websentry-ai/pipelines/pull/25
  # Graphite
  - https://github.com/KittyCAD/modeling-app/pull/6648
  - https://github.com/KittyCAD/modeling-app/pull/6628
  - https://github.com/Varedis-Org/AI-Test-Repo/pull/2
  - https://github.com/deeep-network/bedrock/pull/198
  - https://github.com/Metta-AI/metta/pull/277
  # Copilot
  - https://github.com/hmcts/rpx-xui-webapp/pull/4438
  - https://github.com/ganchdev/quez/pull/104
  - https://github.com/xbcsmith/ymlfxr/pull/13
  - https://github.com/tinapayy/B-1N1T/pull/36
  - https://github.com/coder/devcontainer-features/pull/6

# Questions to ask to label the review comments
review_comment_labels: []
# - label: correct
#   question: Is this comment correct?

# Benchmark reviews generated by running
#   python -m scripts.experiment benchmark <experiment_name>
benchmark_reviews: []
