Build Your Own Loom Clone with Next.js and Mux
In this tutorial, you will learn how to build a fully functional screen recording and sharing platform, similar to Loom, using Next.js 15 and Mux. We will cover creating a custom browser-based screen recorder, implementing a direct upload system to Mux, integrating AI-powered transcription and summarization, adding automatic watermarking, and setting up a sharable video page with a Mux player. By the end, you’ll have a production-ready foundation for your own video product.
Understanding Video Processing Challenges and Mux Solutions
Traditionally, handling video online presents several challenges:
- Encoding: Raw video files use various codecs, and not all browsers support every codec. This can lead to playback issues.
- Streaming Format: Large raw video files require users to download the entire file before playback. Professional platforms use HTTP Live Streaming (HLS), which breaks videos into small chunks (2-10 seconds) for near-instant playback and uses an M3U8 playlist to manage chunk delivery.
- Adaptive Bitrate Streaming (ABR): Videos are encoded at multiple quality levels (e.g., 360p, 720p, 1080p). The player automatically adjusts the quality based on the user’s internet speed for smooth playback.
- Storage and CDN Distribution: Video content needs to be stored efficiently and delivered quickly to users globally.
Mux simplifies these complexities. When you upload a raw video file, Mux automatically handles transcoding into multiple quality levels, generates HLS streams, distributes the content over a global CDN, and provides a simple playback URL. It also offers AI features for transcription and summarization.
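To make the adaptive-bitrate idea concrete, here is a minimal sketch of the selection logic an HLS player applies: measure throughput, then pick the best rendition that fits. The rendition ladder and bandwidth numbers below are illustrative, not Mux's actual values.

```typescript
// Each rendition in an HLS master playlist advertises a bandwidth requirement.
type Rendition = { name: string; bandwidth: number }; // bits per second

// Illustrative ladder, matching the 360p/720p/1080p tiers mentioned above.
const ladder: Rendition[] = [
  { name: '360p', bandwidth: 800_000 },
  { name: '720p', bandwidth: 2_500_000 },
  { name: '1080p', bandwidth: 5_000_000 },
];

// Pick the highest-quality rendition whose bandwidth fits the measured
// throughput; fall back to the lowest tier on very slow connections.
function pickRendition(measuredBps: number, renditions: Rendition[]): Rendition {
  const sorted = [...renditions].sort((a, b) => a.bandwidth - b.bandwidth);
  const fitting = sorted.filter((r) => r.bandwidth <= measuredBps);
  return fitting.length > 0 ? fitting[fitting.length - 1] : sorted[0];
}
```

For example, on a 3 Mbps connection `pickRendition(3_000_000, ladder)` selects the 720p rendition; real players re-run this decision continuously as throughput changes.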
App Architecture: Direct Upload
We will use a direct upload architecture, which is more efficient and secure than server relay:
- The user initiates an upload.
- Your server requests a temporary, signed upload URL from Mux.
- Your server sends this URL back to the user’s browser.
- The browser uploads the video file directly to Mux’s servers via their CDN.
This method prevents your server from becoming a bottleneck, reduces bandwidth costs, and enhances security as the video file never passes through your server. The signed URL is temporary, single-use, and contains asset settings like quality and transcription requirements.
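The four steps above can be sketched from the browser's point of view. This is a sketch, not the tutorial's final code: `getTicket` stands in for the server endpoint built later in this tutorial, and the HTTP transport is injected so the flow is easy to follow in isolation.

```typescript
type UploadTicket = { uploadId: string; uploadUrl: string };

// Client-side view of the direct-upload flow. The server half (requesting a
// signed URL from Mux) is hidden behind `getTicket`; the raw HTTP PUT is
// hidden behind `put`, so the sketch stays transport-agnostic.
async function directUpload(
  file: Blob,
  getTicket: () => Promise<UploadTicket>, // steps 1-3: ask your server for a signed Mux URL
  put: (url: string, body: Blob) => Promise<number> // step 4: PUT the file straight to Mux
): Promise<string> {
  const ticket = await getTicket();
  const status = await put(ticket.uploadUrl, file);
  if (status !== 200) {
    throw new Error(`Direct upload failed with HTTP status ${status}`);
  }
  // Keep the upload ID so you can poll Mux for the resulting asset later.
  return ticket.uploadId;
}
```

In the real app, `getTicket` would call the `createUploadUrl` server action from Step 4, and `put` would be `fetch(url, { method: 'PUT', body: file }).then((r) => r.status)`.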
Prerequisites
- Node.js installed
- Basic understanding of React and Next.js
- A Mux account (free tier available)
Step 1: Set Up Your Next.js Project
Create a new Next.js project using the following command in your terminal:
npx create-next-app@latest
Accept the default options during the setup process. Once the project is created, navigate into the project directory:
cd your-project-name
Step 2: Install Dependencies
Install the necessary Mux packages and other dependencies:
npm install @mux/mux-node @mux/mux-player-react @mux/mux-ai lucide-react
Note: We use @mux/mux-player-react instead of a generic player like Video.js because it’s optimized for Mux’s infrastructure, offering built-in analytics, automatic quality selection, and features like hover preview.
Step 3: Configure Mux Credentials
- Go to mux.com and log in or sign up for an account.
- Navigate to Settings > Access Tokens.
- Click Create new token. Name it (e.g., “recorder”).
- For permissions, select only Mux Video with read and write access. This follows the principle of least privilege.
- Save the token. You will receive a Token ID and a Secret Key. Copy these or download them as an .env file.
- Create a .env.local file in the root of your project.
- Add your Mux credentials to the .env.local file:
MUX_TOKEN_ID=YOUR_TOKEN_ID
MUX_TOKEN_SECRET=YOUR_TOKEN_SECRET
Step 4: Create Server Actions for Mux Integration
Create a file named actions.ts inside the app directory. This file will contain Next.js Server Actions: functions that execute only on the server, so your Mux credentials never reach the browser.
// app/actions.ts
'use server';
import Mux from '@mux/mux-node';
import { auth } from '@clerk/nextjs'; // Example for authentication, replace if not using Clerk
// mux-node v7-style constructor; v8+ instead takes an options object:
// new Mux({ tokenId, tokenSecret }) with methods under mux.video
const { Video } = new Mux(
  process.env.MUX_TOKEN_ID as string,
  process.env.MUX_TOKEN_SECRET as string
);
// Function to create a direct upload URL
export async function createUploadUrl(userId: string) {
  const upload = await Video.Uploads.create({
    new_asset_settings: {
      playback_policy: ['public'], // 'public' for anyone, or ['signed'] for authenticated access
      // AI features like transcription and summarization require the 'plus'
      // or 'premium' quality tier. Ensure your Mux plan supports these tiers.
      video_quality: 'plus',
      // Enable MP4 downloads
      mp4_support: 'standard',
      // Request automatic transcription (auto-generated subtitles, which Mux
      // produces using OpenAI's Whisper model)
      input: [
        {
          generated_subtitles: [{ language_code: 'en', name: 'English' }],
        },
      ],
    },
    // For production, restrict the CORS origin to your domain, e.g. 'https://yourdomain.com'
    cors_origin: '*',
  });
return {
uploadId: upload.id,
uploadUrl: upload.url,
};
}
// Function to get asset details after upload
export async function getAssetDetails(uploadId: string) {
try {
    const upload = await Video.Uploads.get(uploadId);
    // The Mux API returns snake_case fields, e.g. asset_id
    if (upload.asset_id) {
      const asset = await Video.Assets.get(upload.asset_id);
      return {
        playbackId: asset.playback_ids?.[0]?.id,
        assetId: asset.id,
        status: asset.status,
        transcriptionStatus:
          asset.tracks?.find((t) => t.type === 'text' && t.text_type === 'subtitles')?.status ||
          'pending',
      };
    }
    return { status: upload.status, transcriptionStatus: 'pending' };
} catch (error) {
console.error('Error fetching asset details:', error);
return { status: 'errored', transcriptionStatus: 'errored' };
}
}
// Function to list all assets for a user
export async function listUserAssets(userId: string) {
// In a real app, you'd filter assets by userId stored in your database.
// For this example, we list all assets.
try {
const assets = await Video.Assets.list({
limit: 100,
// filter by user ID if stored in metadata
// search: `data.userId:${userId}`
});
return assets.data.map(asset => ({
id: asset.id,
playbackId: asset.playback_ids?.[0]?.id,
status: asset.status,
createdAt: asset.created_at,
duration: asset.duration,
// Ensure metadata is enabled in Mux if you store custom data like userId
// userId: asset.data?.userId
}));
} catch (error) {
console.error('Error listing assets:', error);
return [];
}
}
// Function to get video status and transcript
export async function getVideoStatusAndTranscript(playbackId: string) {
try {
    // Look the asset up by its playback ID. Listing and filtering like this is
    // fine for a small demo; in production you would store the asset ID next
    // to the playback ID in your own database and fetch it directly.
    const assets = await Video.Assets.list({ limit: 100 });
    const asset = assets.data.find((a) => a.playback_ids?.[0]?.id === playbackId);
    if (!asset) {
      return { status: 'not_found', transcriptStatus: 'not_found', transcript: null };
    }
    let transcriptStatus = 'pending';
    let transcript: { startTime: string; endTime: string; text: string }[] | null = null;
    const subtitleTrack = asset.tracks?.find((t) => t.type === 'text' && t.text_type === 'subtitles');
    if (subtitleTrack) {
      transcriptStatus = subtitleTrack.status ?? 'pending';
      if (transcriptStatus === 'ready') {
        // Text tracks are served under the /text/ path on stream.mux.com
        const vttUrl = `https://stream.mux.com/${playbackId}/text/${subtitleTrack.id}.vtt`;
        const response = await fetch(vttUrl);
        const text = await response.text();
        // Basic VTT parsing: split on blank lines and skip the WEBVTT header.
        // A more robust parser would also handle cue identifiers and settings.
        const blocks = text.split('\n\n');
        transcript = blocks.slice(1).map((block: string) => {
          const [time, ...content] = block.split('\n');
          const [startTime, endTime] = time.split(' --> ');
          return { startTime, endTime, text: content.join(' ') };
        });
}
}
return {
status: asset.status,
transcriptStatus,
transcript,
playbackId: asset.playback_ids?.[0]?.id,
};
} catch (error) {
console.error('Error fetching video status and transcript:', error);
return { status: 'errored', transcriptStatus: 'errored', transcript: null };
}
}
// Helper to format VTT timecodes (optional, for advanced parsing)
function formatVttTime(time: string): string {
// Implementation for formatting time if needed
return time;
}
Source: Build Your Own Video Sharing App – Loom Clone with Next.js and Mux JavaScript Tutorial (YouTube)