Isolating Instruments in Songs – Best Tools to Date (2024)

Isolating individual instruments can be tricky, depending on the song. You’ll never be able to perfectly isolate a single instrument by hand without losing some of that instrument’s sound quality.

Splitting a song into distinct vocals and instruments has always been a headache for producers, DJs, and anyone who wants to play around with isolated audio. There are many ways to do it, but the process can be time-consuming and often imperfect. Open-source music separation tools make this tricky task faster and easier.

From least to most effective:

  • EQ and phase cancellation
  • Getting the MIDI information used in the song and assigning the part we want to an appropriate synth
  • Using AI music separation tools
  • Getting the stem files for the song

With EQ and phase cancellation, you can isolate bass fairly well: use a low-pass filter to cut off the top end, and phase-cancel the side channels so you’re left with only the center image. However, this only gives you the low end of the bass, not the higher harmonics the filter had to remove.
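As a sketch of that mid/side trick (with numpy and synthetic signals; real stereo mixes are messier, but the cancellation mechanics are the same):

```python
import numpy as np

# Toy stereo "track": a 60 Hz bass panned dead center, and a synth that is
# pure side content (out of phase between the channels). Real wide-panned
# parts are only partly out of phase, so real cancellation is imperfect.
sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 60 * t)            # center image
synth = 0.4 * np.sin(2 * np.pi * 3000 * t)   # side content

left = bass + synth
right = bass - synth

# Mid/side decomposition: summing keeps what both channels share (the
# center image); differencing keeps only the out-of-phase side content.
mid = (left + right) / 2    # -> the centered bass
side = (left - right) / 2   # -> the wide synth; the bass cancels out

print(round(np.max(np.abs(mid - bass)), 6))   # → 0.0
print(round(np.max(np.abs(side - synth)), 6)) # → 0.0
```

On real material you would then low-pass the mid signal, as described above, which is also where the bass’s upper harmonics get lost.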

Besides stem files, which we’ve been able to work with for years, we now have AI audio-separation tools, the easiest and most effective way to isolate specific instruments or vocals from music.

In this article, we will concentrate on those AI audio-separation tools. But don’t worry, we’re also going to explain stems and show you the best ways to get them.

Best Current Tools to Isolate Instruments from Music

Machine learning has made incredible progress in allowing users to edit individual notes in a recorded track that includes, for example, multiple voices or a guitar playing multiple notes. It’s mind-blowing, but even AI has significant limitations. That said, there is a lot of promising research suggesting there will be fewer and fewer of those limitations.

We compared various AI online tools for isolating music:

| Tool | Instrumental (Beats) | Acapella (Vocals) | Drums | Bass | Guitar |
|---|---|---|---|---|---|
| — | Fair | Good | Poor | Fair | Fair |
| PhonicMind | Good | Good | Excellent | Poor | Good |
| Xtrax Stems | Good | Excellent | Fair | Fair | Good |
| Virtual DJ | Fair | Good | Poor | Poor | Fair |
| — | Excellent | Excellent | Good | Good | Good |
| Demucs | Fair | Fair | Fair | Fair | Good |
| — | Fair | Fair | Good | Poor | Poor |
| RX 7 | Good | Fair | Good | Poor | Good |
| — | Good | Good | Good | Good | — |

*Disclaimer: For Spleeter, we used SpleeterGUI 2.5 with full bandwidth (high quality) turned on. For Demucs and Virtual DJ, we used the default settings.

In October 2023, the service introduced ‘Orion’, its latest AI breakthrough in audio separation. Orion features an innovative method of ‘hallucinating’ vocals directly into the mix, resulting in cleaner and more precise vocal-stem extraction, and even the ability to recreate vocal stems when necessary.

Key features of Orion include:

  • Significantly reduced phasing effects and distortion.
  • A remarkable 70% increase in cleanliness by minimizing artifacts.
  • Preservation of all vocal stem tonal qualities, echo, and reverb.
  • Precise extraction of unison singing.
  • Surpassing its predecessor, ‘Phoenix’, Orion offers improved audio quality and delivers split results twice as fast. A substantial 2.5 dB increase in SDR underscores its superior separation quality.
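For context on that last number: SDR (signal-to-distortion ratio) compares the energy of the true source to the energy of the error in the separated estimate, in dB. Here is a minimal Python version of the basic definition (evaluation suites such as BSS Eval use a more elaborate decomposition):

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Basic signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# A clean 440 Hz tone as the "true stem", and a noisy "separated" version.
rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
noisy = s + 0.1 * rng.standard_normal(s.shape)

print(round(sdr(s, noisy), 1))  # roughly 17 dB for this noise level
```

Every extra 3 dB roughly halves the residual error energy, so a 2.5 dB gain cuts it to a bit more than half (10^-0.25 ≈ 0.56).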

Use the coupon code startingtodj15 to get 15% off on all products.

Xtrax Stems: The most natural instrumental, but it bleeds. Great beat isolation with minimal bleeding, and one of the most natural acapellas.

Xtrax would be our first choice for extracting guitar from a song. 

PhonicMind: This one seems to keep the vocals musical, almost as if harmony were added back behind them. That can be marvelous. There’s massive ducking on the beats, though.

PhonicMind is the second-best option for isolating drums in a song.

RX 7: Not much bleeding in the beats, though there’s a slight artifact, as if the sound were coming through a barrel. Solid overall.

I’m not too fond of most iZotope plugins, but RX 7 is a crazy tool. It is indeed one of the best source-separation tools on the market so far.

I’ve compared it to many other tools, and I think the biggest reason RX7 is better is that it’s built to separate acapella, drums, and bass. It can easily isolate a kick drum from the sub-bass, which makes separating drums so much better.


I’d suggest using the default “channel-independent” algorithm. It’s faster and sounds the best in most cases, since the other algorithms often sound like a phaser effect. There’s some artifacting, like a flanger/underwater sound, but other than that, it’s all good.

Isolating instruments from music is now possible using AI, and Splitter is based on Deezer’s open-source research project Spleeter to accomplish this.

Currently, a few different models are available: two-stem and five-stem models.

The five-stem model can extract vocals, drums, piano, bass, and others (guitar, synths, etc.), while the two-stem model extracts the instrumental and the vocals. In the future, we’ll hopefully see more models.

Demucs: There’s some artifacting, like a poorly received radio channel, and more bleeding. However, the instrumental stays intact and sounds far more natural than the other two, and the acapella is cleaner.

Moises app isolates and masters tracks to create remixes, samples, and mashups in different formats using machine learning. It uses Spleeter – a state-of-the-art audio separation algorithm.

Using the Moises AI Platform on Android or iPhone, you can play with the instrumental track for karaoke sing-alongs, and by excluding just one track, you can generate a “minus one” song to help you play along to the songs with drums, guitars, bass, etc.

It works with Spotify, and it’s also helpful for active listening, learning, transcriptions, and more.

You won’t regret using Moises for extracting a bass from a song. There’s some bleeding on both the instrumental and the vocals, but clean beats and clean vocals – my favorite so far! The acapella is clean (with no claps), though there is some accent on the swish.

Artificial intelligence has been applied to the complex job of vocal-track isolation before, but this is the first service to exceed even Spleeter by Deezer and PhonicMind in approachability and quality of results.

This music-isolation AI is so impressive! It did a better job than the others, which cut a lot of frequencies and left metallic artifacts. Its outputs feel like natural isolation because they’re so clean.

Although it’s the best option in our comparison for extracting background music from a song, there are a lot of others we didn’t test. There are over 80 tools out there!

Magix Acid’s AI implementation is the first that comes to mind. It also uses AI (Spleeter) to split instrumentals from any tune, for free. Be sure to check it out if you are an amateur music producer, beat-maker, or beat isolator.

How Do Music Source Separation AI Tools Work?

So the backend is mainly just Spleeter, with some tools using Demucs or the open-source Vocal-Remover library instead. A machine-learning model is trained on terabytes of data to detect the patterns linking a mastered audio file to the fundamental tracks it’s built from (vocals + instrumentals).

Practically, a machine “learns” what various instruments sound like, so it can recognize the separate parts in the full song’s spectrum and then cut those parts out individually. It works great with clear, well-defined, loud instruments in simple tunes, but not so well with parts buried in a complex mix (where even a human would have trouble picking them out).

The more data you have, the more trustworthy the model is. Also, the model is expected to perform best on songs similar to the ones given to it in training.

What’s Spleeter?

In late 2019, Deezer released Spleeter, a machine-learning tool for audio separation. Spleeter comes from Deezer’s research department and is available online as an open-source Python library built on TensorFlow. Although source separation is a relatively obscure topic, its importance in music information retrieval (MIR) can significantly impact how music is created and consumed.

While it may appear to be an essentially straightforward process, reliable source separation is hard to achieve. Today, most professionally recorded music is created by recording each instrument on a different channel; the final mixed track is then produced in a step called the “mixdown.”

In this final step, all the original tracks are combined for mastering and then digitally compressed for distribution. The sound waveforms mesh together in a manner akin to an irreversible chemical reaction that is difficult to undo. Nevertheless, Spleeter has made the impossibly complex task of source separation a lot simpler using machine learning.


A standard technique employed for source separation is time-frequency (TF) masking. Different kinds of sounds in a musical track occupy different frequencies; the lead vocals, for example, use different frequency bands than the drums. TF masking filters the blend of frequencies that makes up a song, letting us pick and choose which frequencies to keep. What remains after this process is the isolated stem of the instrument we want to separate.
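A toy, numpy-only version of TF masking: cut the mix into short blocks, move each block into the frequency domain, and zero out the bins that don’t belong to the part we want. The signals here are synthetic (a steady low tone plus a high tone that enters halfway through), and the hard, hand-picked cutoff stands in for the soft masks real separators learn:

```python
import numpy as np

sr = 8000
t = np.arange(2 * sr) / sr
bass = np.sin(2 * np.pi * 120 * t)                            # plays throughout
lead = np.where(t >= 1.0, np.sin(2 * np.pi * 2000 * t), 0.0)  # second half only
mix = bass + lead

block = 1000  # samples per analysis block (real tools use overlapped frames)
out = np.zeros_like(mix)
for start in range(0, len(mix), block):
    seg = mix[start:start + block]
    spectrum = np.fft.rfft(seg)
    freqs = np.fft.rfftfreq(len(seg), 1 / sr)
    spectrum[freqs >= 500] = 0               # mask: keep only the bass region
    out[start:start + block] = np.fft.irfft(spectrum, n=len(seg))

# What survives the mask is (approximately) the isolated bass stem.
print(round(np.max(np.abs(out - bass)), 3))  # → 0.0
```

The toy tones were chosen to sit exactly on FFT bins, so the recovery is essentially perfect; with real instruments, whose energy smears across many bins and overlaps, the mask is what the machine-learning model has to estimate.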

How to keep track of progress in the music source separation field?

One of the best ways to keep up with progress on music problems like this is the annual MIREX competitions, a set of contests held every year in conjunction with the ISMIR (International Society for Music Information Retrieval) conference.

You can pretty much look at the competition results and grasp where the field is. Sure, someone in the industry might have beaten the winner by a percent or two behind closed doors, but by and large, this is a great way to know what’s up with the serious problems at the intersection of music and machine learning.

Just the fact that the competition is still happening tells you the problem is not solved. Otherwise, it would be an extremely boring competition.

Though they haven’t run it in the last couple of years, until 2016 one of the competitions was “singing voice separation.” You can look up the results, but the name alone tells you that isolating vocals was as far as the competition went. The results were alright, and the field was ready to move on to separating instruments from music.

Other Methods of Separating Instruments from Music

When people started using samples, they would find parts on recordings where you only heard, for instance, the drums play. They would then sample/record those bits and pieces and work with them.

Another method was to use recordings made in the early days of stereo mixing. Some of the old Led Zeppelin and Marvin Gaye records, for instance, have the vocals in the center of the mix, with the drums panned hard right and the other instruments panned hard left. Find a measure or two of the drums without vocals on top, and we’ve got ourselves a drum sample.

While EQing can help nowadays, there is far too much overlap for this process to really succeed on its own, unless you’re looking for something that sits in its own range of a well-produced tune, like bass. For vocals, the most typical practice relies on how most vocal music is mixed: vocals down the center, instruments spread to the sides.

By splitting a stereo track into left and right mono signals, phase-inverting one side, and then playing them back together, you’ll eliminate everything down the middle, normally just the lead vocal. Likewise, by phase-inverting an instrumental track and playing it over the original full mix, you can separate out the vocal.
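The second trick can be sketched in a few lines of numpy. It assumes the instrumental is sample-aligned and identically mastered to the full mix, which is a big “if” on real releases; the signals below are toy mono stand-ins:

```python
import numpy as np

# Toy mono signals standing in for real tracks.
sr = 22050
t = np.arange(sr) / sr
instrumental = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
vocal = 0.8 * np.sin(2 * np.pi * 660 * t)
full_mix = instrumental + vocal

# Phase-invert the instrumental and sum it with the full mix:
# everything the two share cancels, leaving only the vocal.
recovered_vocal = full_mix + (-instrumental)

print(round(np.max(np.abs(recovered_vocal - vocal)), 6))  # → 0.0
```

In practice even a one-sample misalignment or a different master level turns this into a comb-filtered mess, which is why the AI tools above are usually the better bet.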

This isn’t so simple nowadays, mainly because we’ve become better at creating a stereo image. The vocals still sit in the middle, but so do other elements, like parts of the drums (normally the bass drum and the snare) and the bass line, so they all cancel together.

Those doing remixes sometimes get access to the original multi-track recordings. These have all the separate instruments intact on individual tracks, just as they were recorded. You need some great contacts, and to be a big shot, for a label and artist to hand over the original recordings.

A song contains a range of sound frequencies. Imagine these frequencies as “audio colors”: the bass is red, and at the other end of the rainbow is high-pitched purple. To separate just one instrument, you can filter the spectrum so you keep only the range you want, just the “red,” for instance, or just the “blue,” like stacking filters over a lens, where some colors are blocked and others are let through.

With software, it’s moderately easy to keep just the frequencies that cover a particular sound in the mix and filter out the rest. A few passes like this with different settings can result in imperfect but workable “separated” tracks.
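Those passes can be sketched as a bank of frequency masks, one per “audio color.” The band edges below are arbitrary choices, and the three pure tones are synthetic stand-ins for real instruments:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
mix = (np.sin(2 * np.pi * 80 * t)      # "red": bass
       + np.sin(2 * np.pi * 800 * t)   # mids
       + np.sin(2 * np.pi * 3000 * t)) # "purple": highs

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)

# Several passes with different band limits, like swapping filters on a lens.
bands = {"bass": (0, 250), "mids": (250, 2000), "highs": (2000, np.inf)}
stems = {}
for name, (lo, hi) in bands.items():
    keep = (freqs >= lo) & (freqs < hi)
    stems[name] = np.fft.irfft(spectrum * keep, n=len(mix))

# The band-limited "stems" add back up to the original mix.
print(round(np.max(np.abs(sum(stems.values()) - mix)), 6))  # → 0.0
```

With real material, each band still contains every instrument’s overtones that happen to fall there, which is exactly the overlap problem described above.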


Stems & How to Separate Songs Into Them

Multitracks are individual sections of a song. For example, there are separate multitracks for vocals, strings, and synths within a single song. These are called “stems” (also “stem files” or “stem tracks”), and they typically come from the original recordings from the studio before final mixing and mastering.

Stems tend to be a group of instruments or vocals mixed together. They’re taken from the multi-tracks, and you’ll have the lead vocals, backup vocals, bass, drums, guitars, etc., each on an individual stem.

For instance, a typical recording of a drum set may involve quite a large number of tracks: two to three microphones on the bass drum, the same on the snare, stereo overhead mics, room mics, and so on. Each of these will be found on an individual stereo or mono track in the multi-track recording.

A stem can be all those tracks mixed down to a stereo stem. Sometimes you can get the close mics as a separate stem and the room and overhead pair on another stem.

To make this simple, consider that each instrument/performer (including vocals) often lays down multiple tracks for their instrument in the studio. 

If the group has two guitarists, each guitarist may record their part many times, with the takes played back simultaneously, superimposed. This is called layering (or widening). The point is that multiple layered recordings make the part sound “fuller,” with more tonal body and presence.

Most times, it’s done to create vocal harmonies and to add fullness compared to a single track or take. With two guitarists, let’s say they record four tracks each, for eight guitar tracks in total. In this example, those eight combined could be called the guitar stem (the non-combined original takes are almost universally just called “tracks”).

How to Isolate or Buy Stems?

Almost all DAWs have a menu option for exporting stems. In Ableton Live, it’s done through the export dialog in the main menu, which can render every track as its own file.

In FL Studio (Fruity Loops), you can even configure which tracks are used to create the stem files, and you’ll end up with one file for each configured track, including audio tracks, effects, and busses.

In some cases, stems can be obtained straight from the studio or mixing engineer with the consent of the copyright holders, sold or provided as stems with the implied intent of remixing, or “extracted” via the other processes described above.

Be conscious that you are using copyrighted material belonging to someone else, and you will need explicit approval to use it outside of your bedroom. Yes, even drum samples are copyright-protected and not free to use as you see fit.

Finally, stems for DJs can be bought at online music stores like Beatport, Juno, and Traxsource, with more stores joining regularly. There are also sets of bass and drum samples and sample libraries available for purchase online.

Now that electronic music has become more mainstream, many artists will offer their “stems” up for sale on iTunes or give them away for free in remix contests.


Consider Better Headphones

To accurately monitor the isolated instrument tracks, a good pair of headphones is essential.

Personally, I’m using the Audio-Technica ATH-M50x. I’ve found them to be incredibly reliable and comfortable to wear for long periods of time. The detachable cables and foldable design also make them convenient when traveling.

I’d say I can store them easily, but I don’t even store them, as I use them daily.

These are high-quality headphones, and one can easily call them industry-standard. From what I’ve seen, they’re commonly used by musicians, producers, and engineers in professional recording studios. After all, they need to hear individual instrument tracks in detail in order to make mixing and mastering decisions.

There’s not much more to say here: these are versatile, reliable headphones, and their accurate sound reproduction and excellent isolation make them a great choice for all studio work, including isolating instruments in songs. Highly recommended if you’re looking for a change!

Tray Fiddy

Tray has come to terms with the fact he will probably never be a famous DJ.... but that hasn't stopped him from mixing and researching audio equipment. Tray has over 12 years of experience DJing at home and events.
