freefilestoprompt.app is a free, browser-based packer that takes multiple files and turns them into a single LLM prompt that fits your target model's context window. You drag files in (or paste text), the tool counts tokens per file, you pick a target model from a dropdown of 14 popular options (Claude Opus 4.7 with 1M tokens, GPT-5 with 400K, Llama 4 Scout with 10M, plus a custom option), and the auto-fit feature greedily drops lowest-priority files until everything fits the budget. Output is a single concatenated prompt with file delimiters in your choice of format — XML, Markdown, or plain — plus an optional directory tree at the top so the model knows the file layout.
The whole thing runs client-side. Files are read with the browser's native FileReader API; their contents never leave your device. There is no upload endpoint, no Freesuite server in the request path, no third-party SDKs, no analytics on file content. Verify by inspecting the Network tab while dropping files.
Drop files into the drop zone or paste text into the area below it. Each file becomes a row showing its name, size, token count, and priority dropdown (high / medium / low). Pin files that must stay regardless of budget. Click Drop on files you want to exclude. Pick a target model — the budget bar fills as you add files. When you click Auto-fit, freefilestoprompt.app first removes dropped files, then keeps pinned files, then greedily includes the rest in priority order until adding the next file would exceed your budget. Excluded files stay visible in the list but are dimmed; you can adjust priorities and re-fit.
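The auto-fit pass described above can be sketched as a small greedy routine. This is an illustrative reconstruction of the documented behavior, not the tool's actual source; the type and function names are made up here:

```typescript
// Greedy auto-fit: exclude dropped files, always keep pinned files,
// then add the rest in priority order until the next file would
// overflow the token budget.
type Priority = "high" | "medium" | "low";

interface PackFile {
  name: string;
  tokens: number;
  priority: Priority;
  pinned?: boolean;
  dropped?: boolean;
}

function autoFit(files: PackFile[], budget: number): PackFile[] {
  const order: Priority[] = ["high", "medium", "low"];
  const candidates = files.filter(f => !f.dropped);     // step 1: remove dropped
  const pinned = candidates.filter(f => f.pinned);      // step 2: pins always stay
  let used = pinned.reduce((sum, f) => sum + f.tokens, 0);

  const kept = [...pinned];
  const rest = candidates
    .filter(f => !f.pinned)
    .sort((a, b) => order.indexOf(a.priority) - order.indexOf(b.priority));

  for (const f of rest) {
    if (used + f.tokens > budget) break;  // stop when the next file won't fit
    kept.push(f);
    used += f.tokens;
  }
  return kept;
}
```

Note that pinned files count against the budget first, so a large pinned file shrinks the room available for everything else.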
Most LLM tasks involving multiple files — code review, documentation Q&A, repo analysis, multi-doc summarization — work best when you give the model all the context at once rather than splitting across multiple turns. Modern context windows are huge (Claude Opus 4.7 and Gemini 2.5 Pro both handle 1M tokens, Llama 4 Scout up to 10M) so packing many files is now practical. The challenge is fitting them under the cap. freefilestoprompt.app handles the budgeting math, the file delimiter formatting, and the output assembly so you can paste one block into your model and go.
freefilestoprompt.app supports three output formats. XML wraps each file in <file path="...">CONTENT</file> tags — this is what Anthropic recommends for Claude and works cleanly for GPT, Gemini, and most other providers. Markdown uses ### File: path headers with fenced code blocks — useful when you want the LLM to render the output back as Markdown. Plain uses === FILE: path === separators with no formatting — simplest, but harder for the model to parse multi-file structure.
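The three delimiter styles can be sketched as a single formatting function. The templates follow the examples in the text; the exact whitespace and function names in the real tool may differ:

```typescript
// Wrap one file's content in the chosen delimiter format.
type Format = "xml" | "markdown" | "plain";

function wrapFile(path: string, content: string, format: Format): string {
  switch (format) {
    case "xml":       // Anthropic-recommended style
      return `<file path="${path}">\n${content}\n</file>`;
    case "markdown":  // header plus fenced code block
      return `### File: ${path}\n\n\`\`\`\n${content}\n\`\`\``;
    case "plain":     // bare separator line
      return `=== FILE: ${path} ===\n${content}`;
  }
}
```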
freefilestoprompt.app uses a calibrated heuristic: ASCII text averages 4 characters per token, CJK (Chinese, Japanese, Korean) averages 1.5 characters per token, and other Unicode (Arabic, Cyrillic, etc.) averages 2.5 characters per token. The estimator errs slightly conservative so packed prompts reliably fit under the model's context window. For exact per-model token counts before sending the packed output, use freetokencounter.app on the result.
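A minimal sketch of that heuristic, assuming per-character script classification (the character ranges and rounding here are illustrative, not the tool's code):

```typescript
// Estimate tokens from character counts: 4 chars/token for ASCII,
// 1.5 for CJK, 2.5 for other Unicode. Ceiling keeps the estimate
// conservative so packed prompts stay under the budget.
const CJK = /[\u3000-\u9FFF\uF900-\uFAFF]/; // partial CJK coverage
const ASCII = /[\x00-\x7F]/;

function estimateTokens(text: string): number {
  let ascii = 0, cjk = 0, other = 0;
  for (const ch of text) {
    if (ASCII.test(ch)) ascii++;
    else if (CJK.test(ch)) cjk++;
    else other++;
  }
  return Math.ceil(ascii / 4 + cjk / 1.5 + other / 2.5);
}
```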
Files read in browser; nothing uploaded. Safe for proprietary code.
Claude Opus 4.7, GPT-5, Gemini 2.5 Pro, Llama 4 Scout (10M ctx), and more.
Pin must-keep files, drop irrelevant ones, auto-fit fills the budget.
XML (Anthropic-recommended), Markdown, or plain delimiters.
freefilestoprompt.app is a free, browser-based tool that takes multiple files (text, code, markdown, JSON, etc.) and packs them into one LLM prompt that fits your target model's context window. You drag files in, the tool counts tokens per file, you pick a target model, and the auto-fit feature drops lowest-priority files until everything fits the budget. Output is a single concatenated prompt with file delimiters in your choice of format (XML, Markdown, or plain).
Most LLM tasks involving multiple files (code review, documentation Q&A, repo analysis, multi-doc summarization) work best when you give the model all the context at once rather than splitting across multiple turns. Modern context windows are huge (Claude Opus 4.7 and Gemini 2.5 Pro both handle 1M tokens, Llama 4 Scout up to 10M) so packing many files is now practical. The challenge is fitting them under the cap — that is what freefilestoprompt.app does.
freefilestoprompt.app uses a calibrated heuristic: ASCII text averages 4 characters per token, CJK (Chinese, Japanese, Korean) averages 1.5 characters per token, and other Unicode (Arabic, Cyrillic, etc.) averages 2.5 characters per token. The estimator errs slightly on the conservative side so packed prompts reliably fit under the model's context window. For exact per-model token counts before sending, use freetokencounter.app on the packed output.
No. freefilestoprompt.app is a static page. Files are read directly in your browser using the FileReader API; their contents never leave your device. There is no upload endpoint, no Freesuite server in the request path, no third-party SDKs, no analytics on file content. You can verify by inspecting the Network tab while dropping files.
XML format wraps each file in <file path="...">CONTENT</file> tags. This is what Anthropic recommends for Claude and works cleanly for GPT, Gemini, and most other providers. Markdown format uses ### File: path headers with fenced code blocks. Plain format uses === FILE: path === separators with no formatting. XML is the safest default for multi-file LLM workflows; Markdown is useful when you want the LLM to render the output back as Markdown.
Each file gets a priority (high, medium, low) and optional pin/drop flags. When you click Auto-fit, freefilestoprompt.app first removes files marked Drop and always keeps pinned files. Among the rest, it greedily includes files in priority order (highs first, then mediums, then lows) until adding the next file would exceed your budget. Excluded files are visually marked but kept in the list so you can adjust priorities and re-fit.
Any text-based file: .txt, .md, .js, .ts, .py, .java, .go, .rb, .rs, .c, .cpp, .h, .css, .html, .json, .yaml, .toml, .xml, .csv, .log, source files in any language, plus any extensionless text. Binary files (images, PDFs, archives, executables) are detected and skipped with a warning. Maximum 200 files and 50 MB combined per session — beyond that the browser starts to slow down.
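Binary detection like the kind described above is commonly done by sampling the file's leading bytes. This is a typical client-side heuristic, not freefilestoprompt.app's actual implementation:

```typescript
// Heuristic binary check: a NUL byte or a high ratio of control bytes
// in the first chunk of the file strongly suggests non-text content.
function looksBinary(bytes: Uint8Array, sampleSize = 8192): boolean {
  const n = Math.min(bytes.length, sampleSize);
  let suspicious = 0;
  for (let i = 0; i < n; i++) {
    const b = bytes[i];
    if (b === 0) return true;                       // NUL almost never appears in text
    if (b < 9 || (b > 13 && b < 32)) suspicious++;  // control chars outside \t \n \r etc.
  }
  return n > 0 && suspicious / n > 0.1;             // >10% control bytes -> binary
}
```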
gitingest and repomix are excellent for fetching public GitHub repos and converting them to LLM-ready prompts. freefilestoprompt.app is browser-only, free, no sign-up, no CLI install, and works on any local files (including private code that should never touch a third-party server). Trade-off: you have to drag files in manually rather than pasting a repo URL — which is also the privacy advantage. Live repo ingestion is on the roadmap as a Pro feature.