S3 file upload/download
This template demonstrates a complete implementation of file upload and download functionality using S3-compatible cloud storage with database metadata management. It showcases best practices for handling file operations with signed URLs, drag-and-drop uploads, and paginated file tables.
Overview
This template provides:
- S3-compatible storage - Works with AWS S3, Cloudflare R2, MinIO, and others
- Signed URLs - Secure, temporary URLs for uploads and downloads
- Drag-and-drop uploads - Modern file upload interface
- Database metadata - Track file information in your database
- Paginated file tables - Efficient display of large file collections
- Search functionality - Find files by name or type
- Sheet components - Edit file metadata with shadcn-vue forms
- Progress tracking - Real-time upload progress indicators
Accessing the template
View the working template at /templates/s3-file-storage in your application to see all features in action.
Architecture
The template follows a clean separation between storage operations and metadata management:
app/
├── components/files/
│ ├── FilesTable.vue # Main table component
│ ├── FilesTableActions.vue # Upload button and search
│ ├── UploadFilesForm.vue # Drag-and-drop upload
│ ├── EditFileForm.vue # Edit metadata form
│ └── DeleteFileDialog.vue # Delete confirmation
├── services/
│ ├── files-client-service.ts # File metadata operations
│ └── storage-client-service.ts # S3 upload/download operations
server/
├── api/
│ ├── files/ # Metadata CRUD endpoints
│ │ ├── index.get.ts # List files
│ │ ├── index.post.ts # Create file record
│ │ ├── [id].put.ts # Update file record
│ │ └── [id].delete.ts # Delete file and record
│ └── storage/
│ └── get-url/
│ ├── upload.post.ts # Get signed upload URL
│ └── [key].get.ts # Get signed download URL
└── services/
├── files-server-service.ts # File metadata business logic
└── storage-server-service.ts # S3 client and signed URLs
Environment configuration
Before using file storage, configure these environment variables in your .env file:
# S3-compatible storage
S3_ENDPOINT=https://your-s3-endpoint.com
S3_REGION=auto
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_BUCKET=your-bucket-name
S3 provider examples
AWS S3:
S3_ENDPOINT=https://s3.amazonaws.com
S3_REGION=us-east-1
Cloudflare R2:
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_REGION=auto
MinIO (self-hosted):
S3_ENDPOINT=https://minio.yourdomain.com
S3_REGION=us-east-1
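The server-side code later in this guide reads these values through Nuxt's useRuntimeConfig(). A minimal mapping in nuxt.config.ts might look like the following sketch (the property names are assumptions chosen to match the config keys used by the storage service below):

// nuxt.config.ts — illustrative runtime config mapping (key names assumed)
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only values, never exposed to the client
    s3Endpoint: process.env.S3_ENDPOINT,
    s3Region: process.env.S3_REGION,
    s3AccessKey: process.env.S3_ACCESS_KEY,
    s3SecretKey: process.env.S3_SECRET_KEY,
    s3Bucket: process.env.S3_BUCKET,
  },
})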
Key features
Signed URLs for security
The template uses signed URLs instead of exposing S3 credentials to the client:
Upload flow:
- Client requests a signed upload URL from your API
- Server generates a temporary URL with write permissions
- Client uploads directly to S3 using the signed URL
- Client saves file metadata to your database
Download flow:
- Client requests a signed download URL from your API
- Server generates a temporary URL with read permissions
- Client downloads the file using the signed URL
This approach:
- Keeps S3 credentials secure on the server
- Reduces server bandwidth (direct S3 uploads/downloads)
- Provides temporary, expiring access
- Allows fine-grained permission control
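Tying the upload flow above together, the client-side helper could look roughly like this sketch. It assumes Nuxt's $fetch and the upload-URL endpoint shown later in this guide; generateStorageKey comes from the template's storage client service:

// Sketch of the client half of the upload flow (endpoint path assumed)
const uploadFileWithSignedUrl = async (
  file: File,
  contentType: string,
  onProgress?: (percent: number) => void
): Promise<{ key: string }> => {
  const key = generateStorageKey(file.name)

  // 1. Request a temporary write URL from the server
  const { url } = await $fetch<{ url: string }>('/api/storage/get-url/upload', {
    method: 'POST',
    body: { key, contentType },
  })

  // 2. PUT the file straight to S3; XMLHttpRequest exposes progress events
  await new Promise<void>((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open('PUT', url)
    xhr.setRequestHeader('Content-Type', contentType)
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) onProgress?.(Math.round((e.loaded / e.total) * 100))
    }
    xhr.onload = () =>
      xhr.status < 300 ? resolve() : reject(new Error(`Upload failed: ${xhr.status}`))
    xhr.onerror = () => reject(new Error('Network error during upload'))
    xhr.send(file)
  })

  return { key }
}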
Drag-and-drop upload
The upload form uses a modern drag-and-drop interface:
<div
  @drop.prevent="handleDrop"
  @dragover.prevent
  @dragenter.prevent
  class="border-2 border-dashed rounded-lg p-8 text-center cursor-pointer"
>
  <input
    type="file"
    multiple
    @change="handleFileSelect"
    class="hidden"
    ref="fileInput"
  />
  <p>{{ $t('files.dragFilesHere') }}</p>
</div>
Features:
- Multiple file selection
- Visual drag feedback
- File type validation
- Size limit enforcement
- Progress indicators per file
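The handlers wired to the drop zone can stay small and delegate the real work to the uploadFile helper shown in the next section. A sketch, with handler names matching the template markup above:

import { ref } from 'vue'

const fileInput = ref<HTMLInputElement | null>(null)

const handleDrop = (event: DragEvent) => {
  const files = Array.from(event.dataTransfer?.files ?? [])
  files.forEach((file) => uploadFile(file))
}

const handleFileSelect = (event: Event) => {
  const input = event.target as HTMLInputElement
  Array.from(input.files ?? []).forEach((file) => uploadFile(file))
  input.value = '' // reset so the same file can be re-selected
}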
Upload with progress tracking
import { ref } from 'vue'

const uploadFile = async (file: File) => {
  const progress = ref(0)

  try {
    // Upload directly to S3, reporting progress as it goes
    const { key } = await uploadFileWithSignedUrl(
      file,
      file.type,
      (percent) => {
        progress.value = percent
      }
    )

    // Save metadata to database
    await createFile({
      key,
      name: file.name,
      size: file.size,
      mimeType: file.type,
      userId: user.value.id,
    })

    handleSuccess('File uploaded successfully')
  } catch (error) {
    handleError(error)
  }
}
Database metadata management
The files table schema in prisma/schema.prisma:
model File {
  id        String   @id @default(dbgenerated("gen_random_uuid()")) @db.Uuid
  key       String   @unique
  name      String
  mimeType  String
  size      BigInt
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  userId    String   @db.Uuid
  user      User     @relation(fields: [userId], references: [id], onDelete: Cascade)

  @@map("file")
}
Key points:
- key is the S3 object key (the unique identifier in the bucket)
- size uses BigInt to support large files
- User relationship for access control
- Metadata is searchable and queryable
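As an illustration of that searchability, a server-side listing function might combine name/type filtering with pagination roughly like this (a sketch; the exact service shape in the template may differ):

// Sketch: paginated, searchable file listing with Prisma
const listFiles = async (userId: string, search: string, page: number, limit = 20) => {
  const where = {
    userId,
    ...(search && {
      OR: [
        { name: { contains: search, mode: 'insensitive' as const } },
        { mimeType: { contains: search, mode: 'insensitive' as const } },
      ],
    }),
  }

  const [files, total] = await Promise.all([
    prisma.file.findMany({
      where,
      orderBy: { createdAt: 'desc' },
      skip: (page - 1) * limit,
      take: limit,
    }),
    prisma.file.count({ where }),
  ])

  return { files, total }
}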
Storage operations
Upload a file
import { uploadFileWithSignedUrl, generateStorageKey } from '@/services/storage-client-service'
import { createFile } from '@/services/files-client-service'

const file = event.target.files[0]

// Upload to S3; a unique key is generated (via generateStorageKey) and returned
const { key: uploadedKey } = await uploadFileWithSignedUrl(
  file,
  file.type,
  (progress) => console.log(`Upload progress: ${progress}%`)
)

// Save metadata to database
await createFile({
  key: uploadedKey,
  name: file.name,
  size: file.size,
  mimeType: file.type,
  userId: currentUserId,
})
Download a file
import { downloadFileWithSignedUrl } from '@/services/storage-client-service'
// Download file (triggers browser download)
await downloadFileWithSignedUrl(file.key, file.name)
The download function:
- Generates a signed URL with a Content-Disposition header
- Forces the browser to download the file instead of displaying it
- Uses the provided filename
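A minimal sketch of what that client-side helper could look like, assuming the download-URL endpoint shown later in this guide:

// Sketch of the client half of the download flow (endpoint path assumed)
const downloadFileWithSignedUrl = async (key: string, filename: string) => {
  const { url } = await $fetch<{ url: string }>(
    `/api/storage/get-url/${encodeURIComponent(key)}`,
    { query: { filename } }
  )

  // The signed URL's Content-Disposition header triggers a download
  const link = document.createElement('a')
  link.href = url
  link.download = filename
  link.click()
}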
Delete a file
import { deleteFile } from '@/services/files-client-service'
// Deletes both S3 object and database record
await deleteFile(fileId)
The server-side delete operation:
- Deletes the file from S3
- Removes the database record
- Both operations wrapped in error handling
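On the server, that combined operation might look like the following sketch (deleteFileAndRecord, s3Client, and bucket are assumed names, not the template's exact API):

import { DeleteObjectCommand } from '@aws-sdk/client-s3'

// Sketch: remove the S3 object first, then the metadata row
const deleteFileAndRecord = async (fileId: string) => {
  const file = await prisma.file.findUniqueOrThrow({ where: { id: fileId } })
  await s3Client.send(new DeleteObjectCommand({ Bucket: bucket, Key: file.key }))
  await prisma.file.delete({ where: { id: fileId } })
}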
Server-side implementation
Generating signed upload URLs
// server/api/storage/get-url/upload.post.ts
export default defineEventHandler(async event => {
  const { key, contentType } = await readBody(event)
  const url = await getSignedUploadUrl(key, contentType)
  return { url }
})
Generating signed download URLs
// server/api/storage/get-url/[key].get.ts
export default defineEventHandler(async event => {
  const key = getRouterParam(event, 'key')
  if (!key) {
    throw createError({ statusCode: 400, message: 'Missing file key' })
  }

  const query = getQuery(event)
  const filename = query.filename as string | undefined

  const url = await getSignedDownloadUrl(
    key,
    3600, // 1 hour expiry
    filename
  )
  return { url }
})
S3 client configuration
// server/services/storage-server-service.ts
import { S3Client } from '@aws-sdk/client-s3'
const createS3Client = () => {
  const config = useRuntimeConfig()

  return new S3Client({
    region: config.s3Region,
    endpoint: config.s3Endpoint,
    credentials: {
      accessKeyId: config.s3AccessKey,
      secretAccessKey: config.s3SecretKey,
    },
  })
}
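The presigning helpers used throughout this guide pair this client with @aws-sdk/s3-request-presigner. A sketch of plausible implementations (the signatures match how the helpers are called above, but the bodies are assumptions, not the template's exact code):

import { GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

export const getSignedUploadUrl = (key: string, contentType: string, expiresIn = 300) => {
  const config = useRuntimeConfig()
  return getSignedUrl(
    createS3Client(),
    new PutObjectCommand({ Bucket: config.s3Bucket, Key: key, ContentType: contentType }),
    { expiresIn }
  )
}

export const getSignedDownloadUrl = (key: string, expiresIn = 3600, filename?: string) => {
  const config = useRuntimeConfig()
  return getSignedUrl(
    createS3Client(),
    new GetObjectCommand({
      Bucket: config.s3Bucket,
      Key: key,
      // Ask S3 to send Content-Disposition so browsers download the file
      ...(filename && { ResponseContentDisposition: `attachment; filename="${filename}"` }),
    }),
    { expiresIn }
  )
}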
Security considerations
Access control
Implement proper access control for file operations:
// Only allow users to access their own files
export default defineEventHandler(async event => {
  const { user } = await requireAuth(event)
  const fileId = getRouterParam(event, 'id')

  const file = await prisma.file.findUnique({
    where: { id: fileId },
  })

  if (!file || file.userId !== user.id) {
    throw createError({
      statusCode: 403,
      message: 'Access denied',
    })
  }

  // Proceed with operation
})
File type validation
Validate file types on both client and server:
const ALLOWED_TYPES = [
  'image/jpeg',
  'image/png',
  'image/gif',
  'application/pdf',
]

const validateFileType = (file: File) => {
  if (!ALLOWED_TYPES.includes(file.type)) {
    throw new Error('File type not allowed')
  }
}
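The same list can back the server-side check before a signed upload URL is ever issued, for example inside upload.post.ts (a sketch):

// Sketch: reject disallowed types before presigning
const { key, contentType } = await readBody(event)
if (!ALLOWED_TYPES.includes(contentType)) {
  throw createError({ statusCode: 400, message: 'File type not allowed' })
}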
File size limits
Enforce size limits to prevent abuse:
const MAX_FILE_SIZE = 10 * 1024 * 1024 // 10MB

const validateFileSize = (file: File) => {
  if (file.size > MAX_FILE_SIZE) {
    throw new Error('File size exceeds limit')
  }
}
Signed URL expiration
Configure appropriate expiration times:
// Short expiry for uploads (5 minutes)
const uploadUrl = await getSignedUploadUrl(key, contentType, 300)
// Longer expiry for downloads (1 hour)
const downloadUrl = await getSignedDownloadUrl(key, 3600)
Content-Type validation
Ensure uploaded content matches declared type:
// Server-side after upload
import { HeadObjectCommand } from '@aws-sdk/client-s3'

const response = await s3Client.send(
  new HeadObjectCommand({ Bucket: bucket, Key: key })
)
if (response.ContentType !== expectedContentType) {
  // Delete the file and throw an error
  throw new Error('Content type mismatch')
}
Adapting the template
Supporting different file types
Add file type categories:
const getFileCategory = (mimeType: string) => {
  if (mimeType.startsWith('image/')) return 'image'
  if (mimeType.startsWith('video/')) return 'video'
  if (mimeType.startsWith('audio/')) return 'audio'
  if (mimeType === 'application/pdf') return 'document'
  return 'other'
}
Adding file previews
For images, generate thumbnails:
// After upload, generate thumbnail
const thumbnail = await generateThumbnail(file)
const { key: thumbnailKey } = await uploadFileWithSignedUrl(
  thumbnail,
  'image/jpeg'
)

// Save thumbnail key with file metadata
await updateFile(fileId, { thumbnailKey })
Organizing files in folders
Use S3 key prefixes as folders:
const generateFolderKey = (userId: string, folder: string, filename: string) => {
  return `users/${userId}/${folder}/${generateId24()}-${filename}`
}
Public vs. private files
Add visibility control:
model File {
  // ... other fields
  isPublic Boolean @default(false)
}
For public files, use public buckets or CDN URLs instead of signed URLs.
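A small helper can then pick the URL strategy per file. An illustrative sketch (the CDN host is a placeholder and the endpoint path is assumed):

// Sketch: public files get a stable CDN URL, private files a signed one
const getFileUrl = async (file: { key: string; isPublic: boolean }) => {
  if (file.isPublic) {
    return `https://cdn.yourdomain.com/${file.key}`
  }
  const { url } = await $fetch<{ url: string }>(
    `/api/storage/get-url/${encodeURIComponent(file.key)}`
  )
  return url
}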
Batch uploads
Handle multiple files efficiently:
const uploadFiles = async (files: File[]) => {
  const results = await Promise.allSettled(
    files.map(file => uploadFile(file))
  )

  const successful = results.filter(r => r.status === 'fulfilled').length
  const failed = results.filter(r => r.status === 'rejected').length

  handleSuccess(`${successful} files uploaded successfully`)
  if (failed > 0) {
    handleError(`${failed} files failed to upload`)
  }
}
Performance optimization
Direct uploads
The template uses direct-to-S3 uploads to:
- Reduce server bandwidth
- Improve upload speed
- Scale better under load
Lazy loading
Load file lists with pagination:
const paginationComposable = usePagination({
  initialLimit: 20,
})
CDN integration
For public files, use a CDN:
const getCDNUrl = (key: string) => {
  return `https://cdn.yourdomain.com/${key}`
}
Multipart uploads
For large files, implement multipart uploads:
// The Upload helper from @aws-sdk/lib-storage switches to multipart
// automatically when the body exceeds the part size (5MB by default)
import { Upload } from '@aws-sdk/lib-storage'

const upload = new Upload({
  client: s3Client,
  params: {
    Bucket: bucket,
    Key: key,
    Body: file,
    ContentType: file.type,
  },
})

upload.on('httpUploadProgress', (progress) => {
  // loaded and total are optional in the SDK types; guard before dividing
  if (progress.total) {
    const percentage = ((progress.loaded ?? 0) / progress.total) * 100
    onProgress(percentage)
  }
})

await upload.done()
Common customizations
Adding file sharing
Generate shareable links with expiry:
const generateShareLink = async (fileId: string, expiresIn: number = 86400) => {
  const file = await prisma.file.findUnique({ where: { id: fileId } })
  if (!file) {
    throw new Error('File not found')
  }

  const shareToken = generateSecureToken()

  await prisma.shareLink.create({
    data: {
      token: shareToken,
      fileId,
      expiresAt: new Date(Date.now() + expiresIn * 1000),
    },
  })

  return `${baseUrl}/share/${shareToken}`
}
Image processing
Resize and optimize images on upload:
import sharp from 'sharp'

const processImage = async (file: File) => {
  const buffer = await file.arrayBuffer()
  const processed = await sharp(Buffer.from(buffer))
    .resize(1920, 1080, { fit: 'inside' })
    .jpeg({ quality: 85 })
    .toBuffer()
  return new Blob([processed], { type: 'image/jpeg' })
}
Virus scanning
Integrate with virus scanning services:
// After upload, scan file
const scanResult = await scanFile(key)
if (!scanResult.clean) {
  // Delete the file
  await deleteFileFromS3(key)
  await deleteFile(fileId)
  throw new Error('File contains malicious content')
}
Troubleshooting
CORS configuration
Ensure your S3 bucket allows CORS for direct uploads:
[
  {
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["PUT", "GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3600
  }
]
Upload failures
Common issues:
- CORS errors: Configure bucket CORS policy
- Size limits: Check both client and S3 bucket limits
- Permissions: Verify S3 credentials have required permissions
- Signed URL expiry: Ensure URLs are used before expiration
Performance issues
- Use multipart uploads for large files
- Implement retry logic for failed uploads
- Consider compression before upload
- Use CDN for downloads
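For the retry point in particular, a small wrapper with exponential backoff is often enough. A sketch (not part of the template):

// Sketch: retry a flaky upload with exponential backoff
const withRetry = async <T>(fn: () => Promise<T>, retries = 3): Promise<T> => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (error) {
      if (attempt >= retries) throw error
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1000)) // 1s, 2s, 4s...
    }
  }
}

// Usage: await withRetry(() => uploadFile(file))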