Donald Feury

An update to my automated silence trimming script

Follow me on YouTube and Twitter for more video editing tricks with ffmpeg, and now a little Python too!

I recently updated my Python script for automatically removing the silent parts of a video.

Previously, I had to call the shell script separately to generate the silence timestamps.

Now, the Python script grabs the output of the shell script directly using subprocess.run.
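
The detect_silence shell script itself isn't shown here (it was covered in the previous post). As a rough idea of what that step does, here is a sketch in Python, assuming the script just wraps ffmpeg's silencedetect filter and prints "silence_end silence_duration" pairs, one per line, which is the format the parsing loop below expects. The function name and default values are made up for illustration.

import re
import subprocess

def detect_silence_sketch(path, noise="-30dB", min_silence="1"):
    # Run ffmpeg's silencedetect filter; it writes its findings to stderr.
    result = subprocess.run(
        ["ffmpeg", "-i", path, "-af",
         "silencedetect=noise={}:d={}".format(noise, min_silence),
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # Pull out the "silence_end: X | silence_duration: Y" lines and print
    # them as "X Y" pairs, one per line, like the shell script is assumed to do.
    pairs = re.findall(r"silence_end: (\S+) \| silence_duration: (\S+)", result.stderr)
    for end, dur in pairs:
        print("{} {}".format(end, dur))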

Script

#!/usr/bin/env python

import sys
import subprocess
import os
from moviepy.editor import VideoFileClip, concatenate_videoclips

input_path = sys.argv[1]
out_path = sys.argv[2]
threshold = sys.argv[3]
duration = sys.argv[4]

try:
    ease = float(sys.argv[5])
except IndexError:
    ease = 0.2

minimum_duration = 1.0


def generate_timestamps(path, threshold, duration):
    command = "detect_silence {} {} {}".format(input_path, threshold, duration)
    output = subprocess.run(command, shell=True, capture_output=True, text=True)
    return output.stdout.split('\n')[:-1]


def main():
    count = 0
    last = 0
    timestamps = generate_timestamps(input_path, threshold, duration)
    print("Timestamps: {}".format(timestamps))

    video = VideoFileClip(input_path)
    full_duration = video.duration
    clips = []

    for times in timestamps:
        end, dur = times.strip().split()
        print("End: {}, Duration: {}".format(end, dur))

        to = float(end) - float(dur) + ease
        start = float(last)
        clip_duration = float(to) - start

        # Clips less than one second don't seem to work
        print("Clip Duration: {} seconds".format(clip_duration))

        if clip_duration < minimum_duration:
            continue

        if full_duration - to < minimum_duration:
            continue

        print("Clip {} (Start: {}, End: {})".format(count, start, to))
        clip = video.subclip(start, to)
        clips.append(clip)

        last = end
        count += 1

    if not clips:
        print("No silence detected, exiting...")
        return

    if full_duration - float(last) > minimum_duration:
        print("Clip {} (Start: {}, End: {})".format(count, last, 'EOF'))
        clips.append(video.subclip(last))

    processed_video = concatenate_videoclips(clips)
    processed_video.write_videofile(
        out_path,
        fps=60,
        preset='ultrafast',
        codec='libx264',
        audio_codec='aac'
    )

    video.close()


main()

I won't go over this in full detail, as I did that in the last post about the silence trimming script. Instead, I'll break down the changes I made.

For a more detailed breakdown of the script, check out the last post I made about it.

Changes

def generate_timestamps(path, threshold, duration):
    command = "detect_silence {} {} {}".format(input_path, threshold, duration)
    output = subprocess.run(command, shell=True, capture_output=True, text=True)
    return output.stdout.split('\n')[:-1]

Here I created a function that passes the needed arguments to the detect_silence script and executes it using subprocess.run.

It needs capture_output=True to actually capture the output, and text=True so the output comes back as a string; otherwise it's returned as raw bytes.
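
As a quick illustration of the difference, using echo as a stand-in for the real script:

import subprocess

# Without text=True the captured output is bytes...
raw = subprocess.run("echo hello", shell=True, capture_output=True)
print(raw.stdout)       # b'hello\n'

# ...with text=True it is decoded to a str, ready for split().
decoded = subprocess.run("echo hello", shell=True, capture_output=True, text=True)
print(decoded.stdout)   # 'hello\n'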

I then split on the newlines and drop the last entry, as it's an empty string that is not needed.
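
The output ends with a trailing newline, so splitting on '\n' always leaves one empty string at the end. With some made-up timestamp values:

# Hypothetical output from detect_silence: "end duration" pairs plus a trailing newline.
stdout = "10.5 2.3\n25.0 1.8\n"

print(stdout.split('\n'))        # ['10.5 2.3', '25.0 1.8', '']
print(stdout.split('\n')[:-1])   # ['10.5 2.3', '25.0 1.8']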

Since we are grabbing the script output straight from stdout, we no longer have to open and read an arbitrary text file to get the timestamps.
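
For contrast, a rough sketch of what the old approach amounted to (the filename here is hypothetical; the previous post has the real details):

# Old approach (sketch): read timestamps from an intermediate file written
# by the shell script, rather than from stdout.
with open("silence_timestamps.txt") as f:   # hypothetical filename
    timestamps = f.read().split('\n')[:-1]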

One last change: previously I added the padding to the start of the next clip to make the transitions less abrupt. Now I add it to the end of the last clip, as it seems more natural.
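
That is what the + ease term in the loop does. A quick worked example with made-up numbers and the default ease of 0.2:

# A silence block reported as end=10.0, duration=2.0 means the audio went
# quiet at 10.0 - 2.0 = 8.0 seconds. Adding the ease keeps an extra 0.2s
# of that silence at the tail of the clip being kept.
end, dur, ease = 10.0, 2.0, 0.2
to = float(end) - float(dur) + ease
print(to)   # 8.2 -> the kept clip now ends at 8.2s instead of 8.0s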

if not clips:
    print("No silence detected, exiting...")
    return

I also added this sanity check to make sure clips were actually generated; you can't concatenate clips that don't exist.

That's it! Now I can remove the silent parts of a video by calling only one script, and it avoids having to create the intermediate timestamp file as well.

Top comments (1)

aproxis

added some comments

#!/usr/bin/env python

# Import the necessary libraries and modules
import sys
import subprocess
import os
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Read the command-line arguments
input_path = sys.argv[1]
out_path = sys.argv[2]
threshold = sys.argv[3]
duration = sys.argv[4]

# Read the optional fifth command-line argument
try:
    ease = float(sys.argv[5])
# If the fifth argument is not provided, use a default value
except IndexError:
    ease = 0.2

# Set the minimum duration of a clip
minimum_duration = 1.0


# Define a function to generate a list of timestamps for silence periods in the input video
def generate_timestamps(path, threshold, duration):
    # Run the `detect_silence` command-line utility
    command = "detect_silence {} {} {}".format(input_path, threshold, duration)
    output = subprocess.run(command, shell=True, capture_output=True, text=True)
    # Split the output into a list of strings, one for each silence period
    return output.stdout.split('\n')[:-1]


# Define the main function
def main():
    # Initialize some variables
    count = 0
    last = 0

    # Get the list of silence timestamps
    timestamps = generate_timestamps(input_path, threshold, duration)
    # Print the list of timestamps for debugging
    print("Timestamps: {}".format(timestamps))

    # Load the input video
    video = VideoFileClip(input_path)
    # Get the duration of the input video
    full_duration = video.duration
    # Initialize an empty list to store the output video clips
    clips = []

    # Iterate over the list of timestamps
    for times in timestamps:
        # Split the timestamp string into two parts: the end time and the duration
        end, dur = times.strip().split()
        # Print the end time and duration for debugging
        print("End: {}, Duration: {}".format(end, dur))

        # Calculate the start time of the clip based on the end time and duration
        to = float(end) - float(dur) + ease
        # Set the start time of the clip to the last end time
        start = float(last)
        # Calculate the duration of the clip
        clip_duration = float(to) - start

        # Print the clip duration for debugging
        print("Clip Duration: {} seconds".format(clip_duration))

        # If the clip duration is less than the minimum duration, skip this iteration
        if clip_duration < minimum_duration:
            continue

        # If the clip extends past the end of the input video, skip this iteration
        if full_duration - to < minimum_duration:
            continue

        # Print the start and end times of the clip for debugging
        print("Clip {} (Start: {}, End: {})".format(count, start, to))
        # Create a subclip of the input video using the start and end times
        clip = video.subclip(start, to)
        # Add the clip to the list of output clips
        clips.append(clip)

        # Update the last end time
        last = end
        # Increment the clip count
        count += 1

    # If no clips were added to the list, it means no silence periods were detected
    if not clips:
        # Print an error message and exit the function
        print("No silence detected, exiting...")
        return

    # If the last clip extends past the end of the input video, add a final clip
    if full_duration - float(last) > minimum_duration:
        print("Clip {} (Start: {}, End: {})".format(count, last, 'EOF'))
        clips.append(video.subclip(last))

    # Concatenate the output clips together to create the final video
    processed_video = concatenate_videoclips(clips)
    # Save the final video to the specified output path
    processed_video.write_videofile(
        out_path,
        fps=60,
        preset='ultrafast',
        codec='libx264',
        audio_codec='aac'
    )

    # Close the input video
    video.close()


# Call the main function
main()