
Files to Prompt — Pack Multiple Files into One LLM Prompt That Fits the Context Window

freefilestoprompt.app is a free, browser-based packer that takes multiple files and turns them into a single LLM prompt that fits your target model's context window. You drag files in (or paste text), the tool counts tokens per file, you pick a target model from a dropdown of 14 popular options (Claude Opus 4.7 with 1M tokens, GPT-5 with 400K, Llama 4 Scout with 10M, plus a custom option), and the auto-fit feature drops lowest-priority files greedily until everything fits the budget. Output is a single concatenated prompt with file delimiters in your choice of format — XML-ish, Markdown, or plain — plus an optional directory tree at the top so the model knows the file layout.

The whole thing runs client-side. Files are read with the browser's native FileReader API; their contents never leave your device. There is no upload endpoint, no Freesuite server in the request path, no third-party SDKs, no analytics on file content. Verify by inspecting the Network tab while dropping files.

How does freefilestoprompt.app work?

Drop files into the drop zone or paste text into the area below it. Each file becomes a row showing its name, size, token count, and priority dropdown (high / medium / low). Pin files that must stay regardless of budget. Click Drop on files you want to exclude. Pick a target model — the budget bar fills as you add files. When you click Auto-fit, freefilestoprompt.app first removes dropped files, then keeps pinned files, then greedily includes the rest in priority order until adding the next file would exceed your budget. Excluded files stay visible in the list but are dimmed; you can adjust priorities and re-fit.
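The auto-fit pass described above can be sketched in a few lines. This is an illustrative re-implementation, not the app's actual source; the field names (`pinned`, `dropped`, `priority`, `tokens`) are assumptions, and it follows the description literally by stopping at the first file that would overflow the budget.

```javascript
const PRIORITY_ORDER = { high: 0, medium: 1, low: 2 };

function autoFit(files, budget) {
  // Dropped files are removed outright; pinned files always stay.
  const kept = files.filter(f => !f.dropped && f.pinned);
  let used = kept.reduce((sum, f) => sum + f.tokens, 0);

  // Remaining files are considered in priority order: highs, then mediums, then lows.
  const candidates = files
    .filter(f => !f.dropped && !f.pinned)
    .sort((a, b) => PRIORITY_ORDER[a.priority] - PRIORITY_ORDER[b.priority]);

  for (const f of candidates) {
    if (used + f.tokens > budget) break; // stop when the next file would exceed the budget
    kept.push(f);
    used += f.tokens;
  }
  return kept;
}
```

With a 50-token pinned file and a 100-token budget, a 60-token high-priority file does not fit and packing stops there; raise the budget and it is included.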

Why pack files into a single prompt?

Most LLM tasks involving multiple files — code review, documentation Q&A, repo analysis, multi-doc summarization — work best when you give the model all the context at once rather than splitting across multiple turns. Modern context windows are huge (Claude Opus 4.7 and Gemini 2.5 Pro both handle 1M tokens, Llama 4 Scout up to 10M) so packing many files is now practical. The challenge is fitting them under the cap. freefilestoprompt.app handles the budgeting math, the file delimiter formatting, and the output assembly so you can paste one block into your model and go.

Output formats

freefilestoprompt.app supports three output formats. XML wraps each file in <file path="...">CONTENT</file> tags — this is what Anthropic recommends for Claude and works cleanly for GPT, Gemini, and most other providers. Markdown uses ### File: path headers with fenced code blocks — useful when you want the LLM to render the output back as Markdown. Plain uses === FILE: path === separators with no formatting — simplest, but harder for the model to parse multi-file structure.
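A minimal formatter showing what each delimiter style produces for one file, based on the descriptions above (the app's exact whitespace and escaping may differ):

```javascript
function wrapFile(path, content, format) {
  switch (format) {
    case 'xml':
      // <file path="...">CONTENT</file> — the Anthropic-recommended style
      return `<file path="${path}">\n${content}\n</file>`;
    case 'markdown':
      // ### File: path header followed by a fenced code block
      return `### File: ${path}\n\n\`\`\`\n${content}\n\`\`\``;
    case 'plain':
      // === FILE: path === separator, no other formatting
      return `=== FILE: ${path} ===\n${content}`;
    default:
      throw new Error(`unknown format: ${format}`);
  }
}
```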

Token counting accuracy

freefilestoprompt.app uses a calibrated heuristic: ASCII text averages 4 characters per token, CJK (Chinese, Japanese, Korean) averages 1.5 characters per token, and other Unicode (Arabic, Cyrillic, etc.) averages 2.5 characters per token. The estimator errs slightly on the conservative side so packed prompts reliably fit under the model's context window. For exact per-model token counts before sending the packed output, use freetokencounter.app on the result.
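The stated ratios translate directly into a per-character-class estimator. The sketch below applies them using a few common CJK code-point ranges; the app's exact range table is not published, so the ranges here are assumptions:

```javascript
function estimateTokens(text) {
  let ascii = 0, cjk = 0, other = 0;
  for (const ch of text) {
    const cp = ch.codePointAt(0);
    if (cp < 128) ascii++;
    else if (
      (cp >= 0x4e00 && cp <= 0x9fff) ||  // CJK Unified Ideographs
      (cp >= 0x3040 && cp <= 0x30ff) ||  // Hiragana + Katakana
      (cp >= 0xac00 && cp <= 0xd7af)     // Hangul syllables
    ) cjk++;
    else other++;
  }
  // ASCII ≈ 4 chars/token, CJK ≈ 1.5, other Unicode ≈ 2.5
  return Math.ceil(ascii / 4 + cjk / 1.5 + other / 2.5);
}
```

So "abcd" estimates as 1 token, while two CJK characters estimate as 2 (2 / 1.5, rounded up — the round-up is where the conservative bias comes from).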

Why use freefilestoprompt.app?

100% client-side

Files read in browser; nothing uploaded. Safe for proprietary code.

14 target models

Claude Opus 4.7, GPT-5, Gemini 2.5 Pro, Llama 4 Scout (10M ctx), and more.

Priorities + auto-fit

Pin must-keep files, drop irrelevant ones, auto-fit fills the budget.

Three output formats

XML (Anthropic-recommended), Markdown, or plain delimiters.

Frequently Asked Questions

What is freefilestoprompt.app?

freefilestoprompt.app is a free, browser-based tool that takes multiple files (text, code, markdown, JSON, etc.) and packs them into one LLM prompt that fits your target model's context window. You drag files in, the tool counts tokens per file, you pick a target model, and the auto-fit feature drops lowest-priority files until everything fits the budget. Output is a single concatenated prompt with file delimiters in your choice of format (XML, Markdown, or plain).

Why pack files into a single prompt?

Most LLM tasks involving multiple files (code review, documentation Q&A, repo analysis, multi-doc summarization) work best when you give the model all the context at once rather than splitting across multiple turns. Modern context windows are huge (Claude Opus 4.7 and Gemini 2.5 Pro both handle 1M tokens, Llama 4 Scout up to 10M) so packing many files is now practical. The challenge is fitting them under the cap — that is what freefilestoprompt.app does.

How does freefilestoprompt.app count tokens?

freefilestoprompt.app uses a calibrated heuristic: ASCII text averages 4 characters per token, CJK (Chinese, Japanese, Korean) averages 1.5 characters per token, and other Unicode (Arabic, Cyrillic, etc.) averages 2.5 characters per token. The estimator errs slightly on the conservative side so packed prompts reliably fit under the model's context window. For exact per-model token counts before sending, use freetokencounter.app on the packed output.

Are my files uploaded anywhere?

No. freefilestoprompt.app is a static page. Files are read directly in your browser using the FileReader API; their contents never leave your device. There is no upload endpoint, no Freesuite server in the request path, no third-party SDKs, no analytics on file content. You can verify by inspecting the Network tab while dropping files.

What output format should I use?

XML format wraps each file in <file path="...">CONTENT</file> tags. This is what Anthropic recommends for Claude and works cleanly for GPT, Gemini, and most other providers. Markdown format uses ### File: path headers with fenced code blocks. Plain format uses === FILE: path === separators with no formatting. XML is the safest default for multi-file LLM workflows; Markdown is useful when you want the LLM to render the output back as Markdown.

How does priority and auto-fit work?

Each file gets a priority (high, medium, low) and optional pin/drop flags. When you click Auto-fit, freefilestoprompt.app first removes files marked Drop and always keeps pinned files. Among the rest, it greedily includes files in priority order (highs first, then mediums, then lows) until adding the next file would exceed your budget. Excluded files are visually marked but kept in the list so you can adjust priorities and re-fit.

What file types does it support?

Any text-based file: .txt, .md, .js, .ts, .py, .java, .go, .rb, .rs, .c, .cpp, .h, .css, .html, .json, .yaml, .toml, .xml, .csv, .log, source files in any language, plus any extensionless text. Binary files (images, PDFs, archives, executables) are detected and skipped with a warning. The limit is 200 files and 50 MB combined per session — beyond that the browser starts to slow down.
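The page does not say how binary files are detected, but a common client-side sniff looks for NUL bytes or a high share of control characters in a leading sample. A hypothetical check along those lines:

```javascript
// Assumption: this mirrors a typical binary sniff, not the app's actual check.
function looksBinary(bytes) {
  const sample = bytes.subarray(0, 8192); // first 8 KB is usually enough
  let suspicious = 0;
  for (const b of sample) {
    if (b === 0) return true; // NUL almost never appears in text files
    // Control characters other than tab (9), LF (10), VT (11), FF (12), CR (13)
    if (b < 9 || (b > 13 && b < 32)) suspicious++;
  }
  return sample.length > 0 && suspicious / sample.length > 0.3;
}
```

UTF-8 text passes cleanly because multi-byte sequences use bytes ≥ 0x80, which this check never flags.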

How is this different from gitingest or repomix?

gitingest and repomix are excellent for fetching public GitHub repos and converting them to LLM-ready prompts. freefilestoprompt.app is browser-only, free, no sign-up, no CLI install, and works on any local files (including private code that should never touch a third-party server). Trade-off: you have to drag files in manually rather than pasting a repo URL — which is also the privacy advantage. Live repo ingestion is on the roadmap as a Pro feature.