
How to make VST strings sound real: how Paradoks got authentic orchestration in C’est Toi

Making music in the box is all fun and games until you need to replicate instruments that humans have been hearing outside the box for centuries. Strings and pianos, for example, are notoriously difficult to recreate in a digital environment, even with high-end VST instruments and samplers like Kontakt. But if anyone has mastered this challenge, it’s Paradoks. His latest release on Nora En Pure’s Purified Records, “C’est Toi,” is a testament to his ability to turn MIDI-based VST strings into some of the most lifelike, expressive string arrangements you’ll hear.

Paradoks, a Belgian producer known for balancing club energy with emotional depth, relies on string-driven chord progressions to bring an authentic feel to his music. Since this is something so many producers struggle with, we figured this was the perfect time to have him break it all down.

In this interview, he shares why VST strings are so tricky, how to work with complex libraries like Kontakt’s String Ensemble, how to create realistic swells and movement, and when to use – or avoid – EQ and compression. You can incorporate some of these techniques in your music using the free Kontakt Player included in Komplete Start.

Try Kontakt free


What’s the biggest challenge in making virtual strings sound as expressive as a live performance?

In general, and I think this applies to other instruments beyond strings (take a synth melody, for example), the biggest challenge is capturing the nuances of human expression. With strings specifically, a human player introduces slight pitch fluctuations, bow pressure variations, dynamic movement, and all kinds of organic imperfections.

In my opinion, these “perfect” imperfections and dynamic performances are what create emotional depth in a string section. Thinking purely in automation terms, imagine assigning an automation lane to each of these parameters on, let’s say, a MIDI violin, and then capturing a human actually playing it: you’d end up with hundreds of slight variations in every lane.

With a sample library, it’s important to replicate that by paying close attention to detail and automating dynamics, articulations, velocity layers, and expression. If we don’t focus on these, it’s easy for a string section to sound robotic and emotionless.

It reminds me of how AI-produced music often feels, lacking the emotion that real human composers bring. In my opinion, all the imperfections are what give music its soul, something I think AI still struggles to replicate in a convincing way – mainly because music is something genuine, something real. We capture a moment and transmit a feeling, and that’s what we need to translate in order to make not only our strings more expressive, but our music as well. If we don’t add these nuances to virtual strings, the result can feel a bit emotionally disconnected, like an AI-generated composition, if you know what I mean.

Of course, no virtual library perfectly replicates the nuances of a live orchestra, but with the right techniques, we can get pretty close.

Pro tip from Paradoks: When automating string dynamics, imagine how a bow moves naturally: longer phrases need smoother mod wheel curves, while shorter notes benefit from sharper moves.
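
If you want to experiment with this idea away from a controller, here is a minimal sketch in Python, assuming the third-party mido library (this is purely illustrative and not part of Paradoks’s workflow; the chord, ramp values, and filename are made up). It writes a one-bar sustained chord with a smooth, evenly spaced CC1 (mod wheel) ramp of the kind that suits a long phrase; for a short note you would compress a similar ramp into far fewer ticks.

```python
# Illustrative sketch only, assuming the third-party "mido" library
# (pip install mido). Writes a one-bar C minor pad with a smooth CC1
# (mod wheel) crescendo into a standalone MIDI file you can drop onto
# a string track in your DAW.
import mido

TICKS_PER_BEAT = 480
mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

# Hold a C minor chord (C3, Eb3, G3) for four beats.
for note in (48, 51, 55):
    track.append(mido.Message('note_on', note=note, velocity=70, time=0))

# Smooth crescendo: 32 evenly spaced mod wheel steps from 20 up to 110.
steps = 32
step_ticks = (4 * TICKS_PER_BEAT) // steps
for i in range(steps):
    value = 20 + round((110 - 20) * i / (steps - 1))
    track.append(mido.Message('control_change', control=1, value=value,
                              time=step_ticks if i else 0))

# Release the chord at the end of the bar.
for i, note in enumerate((48, 51, 55)):
    track.append(mido.Message('note_off', note=note, velocity=0,
                              time=step_ticks if i == 0 else 0))

mid.save('smooth_crescendo.mid')  # hypothetical output filename
```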

How do you approach working with a deep orchestral library like String Ensemble – Symphony Series differently than a simple synth string patch?

With my Paradoks project, I produce electronic dance music, so most of the time I use the strings as support rather than the main part. I generally play more sustained chords, since there is often a synth melody or vocal sitting on top, and I often layer a patch of “real” virtual strings like the Symphony Series with a synth string patch to get the best of both worlds.

Paradoks

With simple synth strings, the approach tends to be more straightforward. The goal is often harmonic support and texture rather than realistic emulation. Synth strings work well in electronic productions when treated as such, and modulating them can give the sound character and bring it to life.

But recently I’ve also been exploring a new alter ego project where I’m making cinematic electronic/acoustic tracks without drums, aiming for a pure listening experience. I can’t say more for now, but it’s been really liberating and has sparked my creativity.

Pro tip from Paradoks: When using synth strings, apply subtle modulation (filter cutoff, detuning, vibrato, etc.) to add warmth and movement.

And in this kind of calmer music, strings can really come to life and be more of a main protagonist than a supporter, and in this case I use more of what a library like the Symphony Series can provide. For example, I focus more on articulation, dynamics, and orchestral realism than I would with a more straightforward synth string patch. Each articulation (legato, staccato, pizzicato, etc.) serves a specific purpose, and the choice of articulation can have a big impact on the story you want to tell.

String Ensemble

I spend time exploring these articulations and combining them to mimic how real string players move between notes. Velocity layers and mod wheel automation play a huge role in adding movement and life.

In the Symphony Series Strings, velocity mainly affects short articulations like pizzicato and staccato, controlling their volume and attack.

Mod wheel in String Ensemble

For sustained notes and legato passages, however, dynamics and expression are controlled through the mod wheel, just like how a real string player uses bowing intensity for crescendos and swells.

Using articulations in String Ensemble

Pro tip from Paradoks: You can layer articulations (legato and staccato, for example) by assigning them to the same keyswitch note so that both play at the same time, giving you a faster attack on the note thanks to the staccato while keeping the sustained legato note.

Notice how in this picture both Sustain and Staccato are mapped to the same key, F6. Now I have the best of both worlds: the attack from the staccato and the sustain from the legato.
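
For what it’s worth, keyswitches are just MIDI notes outside the playing range, so you can also place them from a piano roll or from code. The sketch below (again assuming mido, and purely illustrative) taps a keyswitch just before a chord so a mapping like the layered Sustain + Staccato above is active in time. The keyswitch note number is a placeholder: check which MIDI note your library actually shows as F6, since the displayed octave convention can differ.

```python
# Sketch only (mido assumed). The keyswitch note number is a placeholder:
# verify which MIDI note your library displays as "F6" before relying on it.
import mido

KEYSWITCH_NOTE = 101          # hypothetical value for the F6 keyswitch
TICKS_PER_BEAT = 480

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

# Tap the keyswitch a 16th note ahead of the chord, then release it.
track.append(mido.Message('note_on', note=KEYSWITCH_NOTE, velocity=1, time=0))
track.append(mido.Message('note_off', note=KEYSWITCH_NOTE, velocity=0,
                          time=TICKS_PER_BEAT // 4))

# The chord itself (A minor: A3, C4, E4) lands right after the keyswitch.
for note in (57, 60, 64):
    track.append(mido.Message('note_on', note=note, velocity=80, time=0))
for i, note in enumerate((57, 60, 64)):
    track.append(mido.Message('note_off', note=note, velocity=0,
                              time=2 * TICKS_PER_BEAT if i == 0 else 0))

mid.save('keyswitch_chord.mid')  # hypothetical filename
```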

Do you prefer to play your string parts live on a MIDI keyboard, or do you draw them in manually?

Generally a combination of both. I’ll always prioritize playing the string parts, especially when I’m in the studio – firstly because I came from neo-classical piano before becoming an electronic music producer, and secondly because it’s the simplest way to capture the human touch and emotion while recording: timing, velocity, dynamics with the mod wheel, and so on. I have two approaches, depending on what kind of string section I need.

I often start by playing the full string ensemble chords with both hands, then record automation afterward, like mod wheel and volume adjustments. Since I unfortunately was born with only two hands, I often have to record a few passes to capture all the necessary parameters.

Paradoks plays the full string ensemble chords with both hands, then records automation afterwards

I also often record the instruments separately, using individual instances on their own channels and splitting out the various violins, violas, cellos, and so on. That way, I can record each part’s dynamics and articulation independently. For example, I’ll play a violin line with my right hand while adjusting the mod wheel with my left.

And if I want different expressions for two different violin notes, I’ll just record them on separate channels as separate performances. One channel might have a sustained legato melody, while another adds pizzicato or staccato notes to give it more movement. It really just depends on what the track needs. This method is the most flexible one, both for composing and for mixing. The downside is that it adds so many channels to a project!

Pro tip from Paradoks: Make sure to use a consistent reverb across all these channels, as it places them in the same room and gives a more coherent sound. I like to route them all to the same reverb on a send & return. Also pan them to mimic orchestral seating.
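
As a rough illustration of that last point, here is a small sketch (mido assumed again; section names, channel assignments, and pan values are purely illustrative) that writes one CC10 pan message per string channel to approximate a traditional seating, first violins toward the left and basses toward the right. In practice you would more likely just set the pan knobs on your DAW’s mixer.

```python
# Sketch (mido assumed): one pan (CC10) message per string channel to rough
# out a traditional orchestral seating. Values and section-to-channel mapping
# are illustrative; match them to however your own template is laid out.
import mido

# section: (MIDI channel, CC10 pan value where 0 = left, 64 = center, 127 = right)
SEATING = {
    'violins_1': (0, 32),
    'violins_2': (1, 48),
    'violas':    (2, 72),
    'cellos':    (3, 92),
    'basses':    (4, 108),
}

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

for section, (channel, pan) in SEATING.items():
    # One pan message at the very start of the arrangement per section.
    track.append(mido.Message('control_change', channel=channel,
                              control=10, value=pan, time=0))

mid.save('string_seating_pan.mid')  # hypothetical filename
```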

After recording, I’d be lying if I said I didn’t go into the MIDI clip and tweak parts of the performance to improve it, whether by moving some notes around or slightly changing velocities. You’ll never see me fully quantize notes, though – I’ll always keep some natural timing in the recording, as it adds to the idea of capturing a moment.

I also draw in notes manually from time to time, for example when I’m adding more rhythmical patterns that can be hard to play, or when I want the progression to evolve further. Or when I’m on the airplane on tour and can only use my laptop without a MIDI controller. Then I simply draw notes in with different velocities, purposefully move some notes ever so slightly off the grid, and record automation using my trackpad.

Sometimes I’m lazy and/or want to ideate really quickly; in that case I simply draw in the most basic automation. But I’ve noticed that more often than not it doesn’t sound quite natural and is mainly just a placeholder for when I can focus fully on the string section.

For me, though, it’s by far more fun to record them than to draw them in.

Paradoks DAW

Pro tip from Paradoks: Especially when you draw the notes in, a good technique is to humanize the performance by slightly adjusting the timing and velocity of the notes to mimic the natural inconsistencies of a live string section. If you’re using Ableton, the Humanize function combined with some subtle velocity randomization using the velocity deviation tool can quickly do wonders.
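
If your DAW doesn’t have a humanize function, the idea is easy to approximate yourself. The sketch below is a plain-Python stand-in for it (not Ableton’s actual algorithm): it nudges each note’s start time and velocity by a small random amount, with ranges you would tune by ear.

```python
# A hand-rolled stand-in for the "humanize" idea: nudge each drawn-in note's
# start time and velocity by a small random amount. Operates on a simple
# note list of (start_ticks, pitch, velocity); the ranges are guesses to
# tune by ear.
import random

def humanize(notes, max_shift_ticks=10, max_vel_dev=12, seed=None):
    rng = random.Random(seed)
    humanized = []
    for start, pitch, velocity in notes:
        start = max(0, start + rng.randint(-max_shift_ticks, max_shift_ticks))
        velocity = min(127, max(1, velocity + rng.randint(-max_vel_dev, max_vel_dev)))
        humanized.append((start, pitch, velocity))
    return humanized

# Example: a rigid, fully quantized staccato pattern (480 ticks per beat).
quantized = [(i * 240, 67, 100) for i in range(8)]   # eight G4 eighth notes
print(humanize(quantized, seed=1))
```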

What’s your approach to creating swells, crescendos, and dynamic movement with a virtual string section?

Dynamic movement such as swells and crescendos is a very big part of the storytelling. The notes create the melodies, tension, and resolution, but the swells and crescendos are what shape the energy and contribute to the emotional storytelling, creating moments ranging from a soft, peaceful section to full-on intense drama. In dance music, crescendos are slightly less of a thing because the tracks often have limited dynamic range, as they are compressed for the club, but it is still possible to incorporate them in the breakdowns.

Pro tip from Paradoks: The mod wheel is king for swells and crescendos, combined with volume automation. I often use the mod wheel for dynamics and control the volume with a fader to shape swells and crescendos in real-time. I try to imagine how a real section would build or release tension and mimic that motion through automation.

Paradoks live

One track that really inspires me in this regard is “Time” by Hans Zimmer. I absolutely love how that piece builds up towards such a strong peak. I think the strings in “Time” were likely recorded with real players, but the expressive swells, crescendos and general dynamic movement are things we can replicate with virtual instruments using the mod wheel. For instance, a crescendo might start subtly with the lower strings before the upper strings join in. This again shows the benefit of using different instances on different channels, as each layer can have its own expression.

I like to use ensemble patches to create more cohesive movement, while also using separate instances on individual channels so that different instruments can enter and exit at different times.

In my track “Sense of Wonder,” for example, I used different string articulations and layers: one a cello solo, another the sustained ensemble, and a third playing the notes you can hear around the 2:20 mark in the radio edit. It’s an old track and I’ve gotten much better at strings since, but I think it underlines my point.

How do you approach EQ and compression on virtual strings – do you keep them natural or sculpt them more?

I prefer to try to keep my strings as natural as possible, but it really depends on what other instruments or vocals are present. EQ and compression are highly contextual, so my approach with strings is the same as my approach to anything when I’m producing and mixing a track.

I’m always very selective about the frequency range of every instrument and melody note when producing, making sure every element lives in its own frequency range. It’s way better to place a string section in the low mids if the accompanying vocal sits in a higher range, for example. And sometimes simply not playing certain MIDI notes is better than cutting them out with EQ. This approach not only keeps the mix clean but also guides the listener’s ear to the more emotionally impactful elements you want them to hear.

There will always be some natural overlap in frequencies, though, and this is where EQ and compression come in handy. Then I ask myself a few questions to guide my mixing decisions: are the strings supporting a vocal or synth lead? What other frequencies are in play?

If so, I’ll naturally sidechain some of the string frequencies to the lead, ducking and carving out frequencies to give space to the vocal or synth. I mainly produce electronic dance music, and I find that if I don’t tame the low-mids of the strings, things can get a bit muddy in the context of the mix.

I also often have some sort of string section in the breakdown, usually accompanied by something like a Reese bass sitting underneath everything, so I don’t need the low frequencies of the strings; I cut them with EQ or simply delete the ensemble’s low notes there.
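
To make the “cut them with EQ” part concrete, here is a small offline illustration, assuming numpy and scipy (obviously not his actual plugin chain, and the cutoff is an arbitrary example value): a high-pass filter removes a low component from a toy signal while leaving the upper register mostly untouched.

```python
# Offline illustration only (assumes numpy + scipy). A 4th-order Butterworth
# high-pass around 150 Hz removes string low end that a bass already covers;
# the cutoff is an illustrative value to tune in context.
import numpy as np
from scipy import signal

fs = 44100                       # sample rate in Hz
t = np.arange(fs) / fs           # one second of audio

# Toy "strings" signal: a 65 Hz low component plus an 880 Hz upper component.
low, high = np.sin(2 * np.pi * 65 * t), 0.5 * np.sin(2 * np.pi * 880 * t)
x = low + high

sos = signal.butter(4, 150, btype='highpass', fs=fs, output='sos')
y = signal.sosfilt(sos, x)

# The 65 Hz component should drop sharply while the 880 Hz part survives.
print('RMS before:', np.sqrt(np.mean(x ** 2)).round(3))
print('RMS after :', np.sqrt(np.mean(y ** 2)).round(3))
```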

Sometimes I also use a dynamic resonance suppressor to tame resonant frequencies or add some harmonic saturation in the high frequencies for extra presence and “shine” in a busy mix.

Short answer: I prefer to keep them natural, but more often than not it will need at least a tiny bit of cleaning up to fit the context of the track.

Pro tip from Paradoks: Better decisions when composing and producing – sample and instrument selection, as well as note selection, so the frequency spectrum fills out naturally – will make mixing way easier.

If a new producer is just starting with VST-based string samples and Kontakt libraries, what’s the first technique you’d recommend for getting a more realistic sound?

Care less about realism and more about emotion. I think it’s less important to make it sound realistic than it is to make something people can feel. Of course, as we said, the more depth and realism we add to the performance, the more emotion we can create, but for a beginner producer it takes the pressure off to focus on feeling rather than technicality. Music is about making people feel something, and sometimes a slightly imperfect note has more emotion than a perfectly quantized one.

Paradoks live

I also recommend spending time experimenting with dynamics and articulation. Learning how to use the mod wheel to create dynamism makes a world of difference. It also helps to play with different articulations and understand how, for example, legato, pizzicato, and tremolo each play a role and can create very different parts in a musical score.

One really cool thing to do is explore MPE technology, as it has a lot of potential to add expression when composing music.

Start making your VST strings sound real

A big thanks to Paradoks for breaking down the art of making VST strings sound as natural and expressive as the real thing. His insights on working with complex libraries like Kontakt’s String Ensemble, crafting dynamic swells, and knowing when to apply – or skip – EQ and compression gave a deep look into what it takes to bridge the gap between digital and organic sound.

If there’s one key takeaway from this conversation, it’s that realism isn’t just about having the best libraries – it’s about how you use them. The movement, articulation, and layering techniques Paradoks shared are what truly make the difference in creating lifelike strings. Whether you’re producing club tracks, cinematic scores, or anything in between, the way you shape and control these elements matters far more than the plugin itself.

Once again, big shoutout to Paradoks for sharing his process. Be sure to check out “C’est Toi” and explore Kontakt to level up your string programming.

Try Kontakt free
