Video & Audio Transcoding Using AWS Elemental MediaConvert + Lambda - Node.js + S3 Part 1
In this article, we are going to set up a video transcoder (the configuration can be adjusted for audio-only transcoding) with the AWS Elemental MediaConvert service. We will convert MP4 video into HLS format, which allows playback quality to adapt to the end user's network conditions.
Workflow
The user uploads an .mp4 video into an input S3 bucket.
The uploaded video gets transcoded and saved into an output S3 bucket.
The transcoding status is reported to our backend using a webhook.
Architecture
Setup MediaConvert Role & Job Template
By default, MediaConvert doesn't have an IAM role it can use to run jobs; we have to create one manually. For that, go to AWS Elemental MediaConvert -> Jobs -> Create job and click the AWS integration option in Job settings (on the left side of the dashboard).
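If you prefer to create the role yourself (for example in the IAM console or with infrastructure-as-code), a minimal sketch looks like this. The role must be assumable by MediaConvert and needs read access to the input bucket and write access to the output bucket; because the job below sets a PUBLIC_READ canned ACL on its outputs, s3:PutObjectAcl is included as well. The bucket names are placeholders for your own buckets.

Trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "mediaconvert.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Permissions policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::input-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::output-bucket-name/*"
    }
  ]
}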
Now you can create a job template from the AWS dashboard itself. For inputs, go with the default configuration. For output groups, click the Add button and choose Apple HLS from the options. We will transcode the input video into four renditions (you can choose different bitrates based on your requirements):
720p (HD)
- Video Bitrate: 1,500,000 bps - 4,000,000 bps
- Audio Bitrate: 128,000 bps - 192,000 bps
480p (SD)
- Video Bitrate: 700,000 bps - 1,500,000 bps
- Audio Bitrate: 128,000 bps
360p
- Video Bitrate: 400,000 bps - 1,000,000 bps
- Audio Bitrate: 96,000 bps - 128,000 bps
240p
- Video Bitrate: 300,000 bps - 500,000 bps
- Audio Bitrate: 64,000 bps - 96,000 bps
In the output settings of each output, we will update the Max Bitrate (bits/sec) field in the video encoding settings and the Bitrate (kbits/sec) field in the audio encoding settings.
After configuring all of the outputs, we can finally get our job template JSON, which we will use to trigger MediaConvert jobs from Lambda. There is a Show job JSON option at the bottom left of the dashboard; it opens a dialog with the JSON, which we can copy.
Setup Lambda to trigger MediaConvert Job
Create a Lambda function that triggers a MediaConvert job using the template we copied. The MediaConvert job role ARN, output bucket, and output folder path have been moved into environment variables. We can also add metadata to the job, which we will get back in the status update event.
import {
MediaConvertClient,
CreateJobCommand,
} from "@aws-sdk/client-mediaconvert";
const client = new MediaConvertClient({ region: process.env.REGION });
export const handler = async (event) => {
try {
const { Records } = event;
for (const record of Records) {
// Target folder inside the output bucket (e.g. a dev/prod prefix)
const outputFolder = process.env.OUTPUT_DEV_FOLDER;
const bucket = record.s3.bucket.name;
// S3 URL-encodes object keys and turns spaces into "+", so decode the key
// before using it as the MediaConvert input path
const fileName = decodeURIComponent(
record.s3.object.key.replace(/\+/g, " ")
);
// Job settings copied from the console job template ("Show job JSON"),
// parameterised with the uploaded file and environment variables
const jobTemplate = {
Role: process.env.ROLE, // IAM role MediaConvert assumes to read/write S3 (required)
CustomName: "mediaconvert--test-job-template",
Settings: {
TimecodeConfig: {
Source: "ZEROBASED",
},
OutputGroups: [
{
CustomName: "HLS Transcoding Template Group",
Name: "Apple HLS",
Outputs: [
{
VideoDescription: {
CodecSettings: {
Codec: "H_264",
H264Settings: {
RateControlMode: "QVBR",
SceneChangeDetect:
"TRANSITION_DETECTION",
MaxBitrate: 4000000,
},
},
},
AudioDescriptions: [
{
CodecSettings: {
Codec: "AAC",
AacSettings: {
Bitrate: 128000,
CodingMode:
"CODING_MODE_2_0",
SampleRate: 48000,
},
},
AudioSourceName: "Audio Selector 1",
},
],
OutputSettings: {
HlsSettings: {},
},
ContainerSettings: {
Container: "M3U8",
M3u8Settings: {},
},
NameModifier: "720p",
},
{
VideoDescription: {
CodecSettings: {
Codec: "H_264",
H264Settings: {
RateControlMode: "QVBR",
SceneChangeDetect:
"TRANSITION_DETECTION",
MaxBitrate: 1500000,
},
},
},
AudioDescriptions: [
{
CodecSettings: {
Codec: "AAC",
AacSettings: {
Bitrate: 128000,
CodingMode:
"CODING_MODE_2_0",
SampleRate: 48000,
},
},
AudioSourceName: "Audio Selector 1",
},
],
OutputSettings: {
HlsSettings: {},
},
ContainerSettings: {
Container: "M3U8",
M3u8Settings: {},
},
NameModifier: "480p",
},
{
VideoDescription: {
CodecSettings: {
Codec: "H_264",
H264Settings: {
RateControlMode: "QVBR",
SceneChangeDetect:
"TRANSITION_DETECTION",
MaxBitrate: 1000000,
},
},
},
AudioDescriptions: [
{
CodecSettings: {
Codec: "AAC",
AacSettings: {
Bitrate: 96000,
CodingMode:
"CODING_MODE_2_0",
SampleRate: 48000,
},
},
AudioSourceName: "Audio Selector 1",
},
],
OutputSettings: {
HlsSettings: {},
},
ContainerSettings: {
Container: "M3U8",
M3u8Settings: {},
},
NameModifier: "360p",
},
{
VideoDescription: {
CodecSettings: {
Codec: "H_264",
H264Settings: {
RateControlMode: "QVBR",
SceneChangeDetect:
"TRANSITION_DETECTION",
MaxBitrate: 500000,
},
},
},
AudioDescriptions: [
{
CodecSettings: {
Codec: "AAC",
AacSettings: {
Bitrate: 64000,
CodingMode:
"CODING_MODE_2_0",
SampleRate: 48000,
},
},
AudioSourceName: "Audio Selector 1",
},
],
OutputSettings: {
HlsSettings: {},
},
ContainerSettings: {
Container: "M3U8",
M3u8Settings: {},
},
NameModifier: "240p",
},
],
OutputGroupSettings: {
Type: "HLS_GROUP_SETTINGS",
HlsGroupSettings: {
// 10-second segments written under s3://<output bucket>/<folder>/<file name>/
SegmentLength: 10,
Destination: `s3://${process.env.OUTPUT_BUCKET}/${outputFolder}/${fileName}/`,
DestinationSettings: {
S3Settings: {
AccessControl: {
CannedAcl: "PUBLIC_READ",
},
StorageClass: "STANDARD",
},
},
MinSegmentLength: 0,
},
},
},
],
Inputs: [
{
AudioSelectors: {
"Audio Selector 1": {
DefaultSelection: "DEFAULT",
},
},
VideoSelector: {
Rotate: "AUTO",
},
TimecodeSource: "ZEROBASED",
FileInput: `s3://${bucket}/${fileName}`,
},
],
},
AccelerationSettings: {
Mode: "DISABLED",
},
StatusUpdateInterval: "SECONDS_60",
Priority: 0,
HopDestinations: [],
// Custom metadata echoed back in the MediaConvert status change events,
// which the webhook in Part 2 can use to identify the asset
UserMetadata: {
key: `${outputFolder}/${fileName}.m3u8`,
sizeInBytes: String(record.s3.object.size), // UserMetadata values must be strings
},
};
// Submit the transcoding job to MediaConvert
const command = new CreateJobCommand(jobTemplate);
const data = await client.send(command);
console.log("MediaConvert video job created:", data);
}
} catch (error) {
console.error("Error:", error);
throw error;
}
};
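The handler above expects a few environment variables on the Lambda function; the values below are placeholders for your own account:

REGION=us-west-2
ROLE=arn:aws:iam::111111111111:role/MediaConvertJobRole
OUTPUT_BUCKET=output-bucket-name
OUTPUT_DEV_FOLDER=dev

Also note that the Lambda execution role needs permission to call mediaconvert:CreateJob and iam:PassRole on the MediaConvert job role, otherwise the CreateJobCommand call will be rejected.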
Setup S3 event trigger to invoke Lambda
Now we will set up an S3 event trigger for video uploads into the input bucket. For that, go to the dashboard of the Lambda function we created and click the Add trigger button. Select S3 as the event source and add your input S3 bucket, along with the event types, prefixes, and suffixes you need.
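If you prefer to wire up the trigger programmatically instead of through the console, a minimal sketch with the AWS SDK v3 might look like the following. The function name, bucket name, and ARNs are placeholders for your own resources; the permission statement has to exist before S3 will accept the notification configuration.

import {
  S3Client,
  PutBucketNotificationConfigurationCommand,
} from "@aws-sdk/client-s3";
import { LambdaClient, AddPermissionCommand } from "@aws-sdk/client-lambda";

const region = process.env.REGION;
const s3 = new S3Client({ region });
const lambda = new LambdaClient({ region });

// 1. Allow the input bucket to invoke the Lambda function.
await lambda.send(
  new AddPermissionCommand({
    FunctionName: "mediaconvert-trigger-lambda", // placeholder function name
    StatementId: "AllowS3Invoke",
    Action: "lambda:InvokeFunction",
    Principal: "s3.amazonaws.com",
    SourceArn: "arn:aws:s3:::input-bucket-name",
  })
);

// 2. Invoke the Lambda function whenever a new .mp4 object is created.
await s3.send(
  new PutBucketNotificationConfigurationCommand({
    Bucket: "input-bucket-name",
    NotificationConfiguration: {
      LambdaFunctionConfigurations: [
        {
          LambdaFunctionArn:
            "arn:aws:lambda:us-west-2:111111111111:function:mediaconvert-trigger-lambda",
          Events: ["s3:ObjectCreated:*"],
          Filter: {
            Key: { FilterRules: [{ Name: "suffix", Value: ".mp4" }] },
          },
        },
      ],
    },
  })
);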
Sample S3 Event JSON
{
"Records": [
{
"eventVersion": "2.0",
"eventTime": "2017-08-08T00:19:56.995Z",
"requestParameters": {
"sourceIPAddress": "54.240.197.233"
},
"s3": {
"configurationId": "90bf2f16-1bdf-4de8-bc24-b4bb5cffd5b2",
"object": {
"eTag": "2fb17542d1a80a7cf3f7643da90cc6f4-18",
"key": "vodconsole/TRAILER.mp4",
"sequencer": "005989030743D59111",
"size": 143005084
},
"bucket": {
"ownerIdentity": {
"principalId": ""
},
"name": "input-bucket-name",
"arn": "arn:aws:s3:::xxxxxxx-us-west-2"
},
"s3SchemaVersion": "1.0"
},
"responseElements": {
"x-amz-id-2": "K5eJLBzGn/9NDdPu6u3c9NcwGKNklZyY5ArO9QmGa/t6VH2HfUHHhPuwz2zH1Lz4",
"x-amz-request-id": "E68D073BC46031E2"
},
"awsRegion": "us-west-2",
"eventName": "ObjectCreated:CompleteMultipartUpload",
"userIdentity": {
"principalId": ""
},
"eventSource": "aws:s3"
}
]
}
Conclusion
Finally, we can test this by uploading a video into the input S3 bucket. This will trigger the Lambda function, create the MediaConvert job, and place the transcoded video in our output S3 bucket.
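For example, with the AWS CLI (the file name, bucket, and key below are placeholders matching the sample event above):

aws s3 cp ./TRAILER.mp4 s3://input-bucket-name/vodconsole/TRAILER.mp4

Once the job completes, the output bucket should contain the HLS master playlist along with the 720p, 480p, 360p, and 240p variant playlists and their segments.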
In the next part, we will set up an event trigger for when the MediaConvert job completes, so that we can get the transcoding status and call our backend webhook.