| | code2prompt | ingest |
|---|---|---|
| Mentions | 13 | 5 |
| Stars | 6,895 | 320 |
| Growth | 3.0% | 4.1% |
| Activity | 9.6 | 5.2 |
| Last commit | about 18 hours ago | 27 days ago |
| Language | Rust | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
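The site does not publish the exact formula behind the activity number, but the description above (recent commits weighted more heavily than older ones) suggests a decayed sum over commit ages. Below is a minimal Python sketch of one such scheme; the exponential decay and the 30-day half-life are assumptions for illustration, not the site's actual method.

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Each commit contributes a weight that decays exponentially with age,
    so recent commits count more than older ones. Illustrative only: the
    comparison site does not publish its formula, and the 30-day half-life
    is an assumption."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += 0.5 ** (age_days / half_life_days)  # weight halves every half-life
    return score

# Three commits from last week outweigh three commits from last year.
now = datetime.now(timezone.utc)
recent = [now - timedelta(days=7)] * 3
old = [now - timedelta(days=365)] * 3
print(activity_score(recent) > activity_score(old))  # True
```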
code2prompt
- Revolutionizing LLM Interactions: Code2Prompt – Your Code's New AI Assistant
- Gemini 2.5 Pro vs. Claude 3.7 Sonnet: Coding Comparison
- All Data and AI Weekly #177 - 17-Feb-2025
🚀 https://github.com/mufeedvh/code2prompt
- Show HN: Transform Your Codebase into a Single Markdown Doc for Feeding into AI
At a rough glance it looks pretty similar to another tool I've been using: https://github.com/mufeedvh/code2prompt
- Exploring AI for Refactoring at Scale
Prompt engineering is another beast. You can check some open-source tools. But I would suggest using your 🧠 to think and explain what you want as a result and what specific context you know about the project. AI can do a lot, but it cannot read your mind
- Mufeedvh/code2prompt: A CLI tool to convert your codebase into a single LLM prompt
- Ask HN: How do you load your code base as context window in ChatGPT?
I've read that o1 has a 200k-token context window limit [1]. My code base is about 177k tokens, so I could generate a prompt with the code2prompt [2] tool and load my code base as a prompt, but in practice this doesn't work:
- It doesn't allow me to attach text files when o1 is selected.
- Pasting the whole prompt into the text area freezes the browser for a while, and once that's done I can't submit it because the send button is disabled.
- When creating a project I can attach files, but then I can't select the o1 model.
I'm on the fence about buying the Pro subscription; I would if only I could use o1 with my code base as the context window.
My guess is that when they say 200k tokens they mean through the API? But then I'd incur extra costs, which seems odd since I'm already paying for the subscription, and this isn't an automation use case.
I would appreciate it if anyone could share their experience working with a large context window for the specific use case of a code base.
Thanks!
- [1] https://platform.openai.com/docs/models#o1
- [2] https://github.com/mufeedvh/code2prompt
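One practical step before pasting anything is to measure the prompt's token count locally rather than trusting a character-count estimate. Below is a minimal sketch using the tiktoken library; the `prompt.md` filename is hypothetical, and treating tiktoken's o200k_base encoding (the one it ships for OpenAI's newer models) as an approximation of o1's tokenizer is an assumption.

```python
# pip install tiktoken
import tiktoken

# Count tokens in a prompt file generated by code2prompt before pasting it
# into a model with a fixed context window. "prompt.md" is a hypothetical
# output filename; o200k_base may not match o1's tokenizer exactly.
enc = tiktoken.get_encoding("o200k_base")

with open("prompt.md", encoding="utf-8") as f:
    n_tokens = len(enc.encode(f.read()))

print(f"{n_tokens:,} tokens")
if n_tokens > 200_000:
    print("Over a 200k context window; trim the prompt with exclude patterns first.")
```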
- Code2prompt
- Source to Prompt - Turn your code into an LLM prompt, but with more features
- Show HN: Dump entire Git repos into a single file for LLM prompts
--> There's no option to specify files to include; you must work backwards with the ignore option (see the sketch at the end of this list)
- [code2prompt](https://github.com/mufeedvh/code2prompt)
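Working backwards from an include list to an ignore list is mechanical enough to script. Below is a minimal Python sketch of that inversion; the `files_to_ignore` helper and the glob patterns are hypothetical illustrations, not part of any tool listed above.

```python
from pathlib import Path

def files_to_ignore(repo_root, include_globs):
    """Given the files you *want* to keep, list everything else, so the
    result can be fed to a tool that only supports an ignore/exclude
    option. Hypothetical helper for illustration; not part of any of the
    tools mentioned above."""
    root = Path(repo_root)
    keep = set()
    for pattern in include_globs:
        keep.update(p for p in root.rglob(pattern) if p.is_file())
    return sorted(p for p in root.rglob("*") if p.is_file() and p not in keep)

# Keep only Python and Markdown files; everything else goes on the ignore list.
for path in files_to_ignore(".", ["*.py", "*.md"]):
    print(path)
```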
ingest
What are some alternatives?
Awesome-Prompt-Engineering - This repository contains hand-curated resources for Prompt Engineering with a focus on Generative Pre-trained Transformer (GPT) models, ChatGPT, PaLM, etc.
files-to-prompt - Concatenate a directory full of files into a single prompt for use with LLMs
gpt4-cli - a dead-simple CLI for running the GPT-4 LLM in the terminal
repopack - 📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, and Gemini. [Moved to: https://github.com/yamadashy/repomix]
repomix - 📦 Repomix is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, DeepSeek, Perplexity, Gemini, Gemma, Llama, Grok, and more.
repo2file - Dump selected files from your repo into a single file for easy use with LLMs (Claude, OpenAI, etc.)