Cloudflare R2
S3-compatible object storage with zero egress fees. Ideal for storing large files (images, videos, backups) without the costly data transfer bills typical of AWS S3. Accessible via the S3 API.
Why Cloudflare R2?
Storing large files where AWS S3 egress costs are prohibitive
Serving user-uploaded assets through a CDN-like global network
Any S3-compatible workflow you want to migrate away from AWS
Signal Breakdown
What drives the Trust Score
Download Trend
Last 12 months
Tradeoffs & Caveats
Know before you commit
Need advanced S3 features like Cross-Region Replication — R2 is simpler
Already deep in the AWS ecosystem — S3 integrates seamlessly with other AWS services
Need real-time event notifications — R2's event system is more limited than S3
Pricing
Free tier & paid plans
10GB storage/mo · 1M Class A ops/mo
$0.015/GB-mo storage · $0.36/M Class A ops
Zero egress fees — major advantage over S3
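The rates above reduce to simple arithmetic. A rough cost-estimator sketch (a simplification: it ignores Class B operations and assumes the free-tier allowances of 10 GB and 1M Class A ops are subtracted before billing; check Cloudflare's current pricing page for authoritative numbers):

```typescript
// Rough monthly cost estimator for R2, using the rates quoted above.
// Simplified: ignores Class B ops; subtracts free-tier allowances first.
function estimateR2MonthlyCost(storageGB: number, classAOpsMillions: number): number {
  const billableGB = Math.max(0, storageGB - 10);          // 10 GB storage free
  const billableOpsM = Math.max(0, classAOpsMillions - 1); // 1M Class A ops free
  const storageCost = billableGB * 0.015;                  // $0.015 per GB-month
  const opsCost = billableOpsM * 0.36;                     // $0.36 per million Class A ops
  const egressCost = 0;                                    // zero egress fees
  return storageCost + opsCost + egressCost;
}

// e.g. 500 GB stored and 3M uploads in a month:
console.log(estimateR2MonthlyCost(500, 3).toFixed(2)); // "8.07"
```

On S3, the same workload would add a per-GB egress charge on every download, which is the cost R2 eliminates.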
Alternative Tools
Other options worth considering
Often Used Together
Complementary tools that pair well with Cloudflare R2
Learning Resources
Docs, videos, tutorials, and courses
Get Started
Repository and installation options
View on GitHub
github.com/cloudflare/cloudflare-docs
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
Quick Start
Copy and adapt to get going fast
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
const R2 = new S3Client({
region: 'auto',
endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
},
});
// Generate presigned upload URL
const url = await getSignedUrl(
R2,
new PutObjectCommand({ Bucket: 'assets', Key: 'user-upload.jpg' }),
{ expiresIn: 3600 }
);
console.log('Presigned URL:', url);
Code Examples
Common usage patterns
Presigned upload URL for browser
Generate a presigned URL for direct browser-to-R2 uploads
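On the client side, the URL returned by a route like the one below is consumed with a plain `fetch` PUT. A browser-side sketch (the `/api/upload-url` path matches the route in this example; `File` is the object from a file input, and the harness that calls this function is assumed):

```typescript
// Browser-side sketch: ask the API route for a presigned URL, then PUT the
// file bytes directly to R2. The server never relays the file itself.
async function uploadToR2(file: File): Promise<string> {
  // Request a presigned URL from the API route
  const res = await fetch('/api/upload-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, contentType: file.type }),
  });
  const { url, key } = await res.json();
  // Upload straight to R2; the Content-Type must match what was signed
  await fetch(url, { method: 'PUT', body: file, headers: { 'Content-Type': file.type } });
  return key; // callers store this key to reference the object later
}
```

Because the browser talks to R2 directly, upload bandwidth never touches your server, which is the main reason to presign rather than proxy.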
// app/api/upload-url/route.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
const R2 = new S3Client({
region: 'auto',
endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
},
});
export async function POST(req: Request) {
const { filename, contentType } = await req.json();
const key = `uploads/${Date.now()}-${filename}`;
const url = await getSignedUrl(
R2,
new PutObjectCommand({ Bucket: 'my-bucket', Key: key, ContentType: contentType }),
{ expiresIn: 300 }
);
return Response.json({ url, key });
}
List and delete objects
List all files in a bucket and delete old ones
import { S3Client, ListObjectsV2Command, DeleteObjectsCommand } from '@aws-sdk/client-s3';
const R2 = new S3Client({ /* ...config */ });
// Note: ListObjectsV2 returns at most 1,000 keys per call; paginate with ContinuationToken for larger buckets
const { Contents } = await R2.send(new ListObjectsV2Command({ Bucket: 'my-bucket', Prefix: 'temp/' }));
const cutoff = Date.now() - 24 * 60 * 60 * 1000; // 24 hours ago
const toDelete = (Contents ?? []).filter((obj) => obj.LastModified!.getTime() < cutoff);
if (toDelete.length > 0) {
await R2.send(new DeleteObjectsCommand({
Bucket: 'my-bucket',
Delete: { Objects: toDelete.map(({ Key }) => ({ Key: Key! })) },
}));
console.log(`Deleted ${toDelete.length} old files`);
}
Community Notes
Real experiences from developers who've used this tool