
Developers of Physical Modeling Orchestral Instruments

muziksculp

Senior Member
Hi,

With regard to VSTs of orchestral instruments/sections using physical modeling technology, e.g. developers like Audio Modeling and Sample Modeling: are there other developers doing this? If not, I would love to see more developers working in this specific area of physical modeling.

Applied Acoustic Systems (AAS) uses physical modeling, but so far they have not tackled orchestral instrument emulation with it. Maybe they will in the future. There are also some physically modeled instruments made using NI's Reaktor, but I don't know much about those.

Are there other developers that I should add to this very short list?

I really hope this area of music technology evolves faster, with more developers, researchers, etc. working to take it to the next level.

What are your thoughts on this topic, and are you optimistic about the future of this technology for emulating orchestral acoustic instruments?

Thanks,
Muziksculp
 
There's this - https://xtant-audio.com/product/model-brass/

Also, the field of physically modeled pianos may be a bit more crowded - Pianoteq, Physis, Roland's V-Piano.

There's also been an explosion of analog modeled amps, synths, and fx, which is kind of in the same vein, but yeah, not orchestral yet...

I think if there's ever a trend back to dry orchestral libraries, the physical modeling instruments will get more popular.
 
I love physical modeling, but I don't think there will be a hard push for PM orchestral instruments anytime soon. I'm working on something using PM now, and there is very little info available out there on it. It seems like you have to do your own R&D. Couple that with the fact that people want orchestral instruments to sound exactly like the real thing. It's just asking for a headache. Other types of music aren't so particular when it comes to authentic sound, so I think more experimental stuff will come first and more realistic instruments afterwards. I hope I'm wrong, but from a developer's point of view it seems like too much work.
 
I wonder if any of the currently popular orchestral sample library developers are experimenting with physical modeling technology?
 
honestly I think a lot of study of both, plus clever editing and scripting of actual recordings, will inevitably be the best bet.

mainly because our ears are already fond of, and used to, recordings made in these places - to the point that regular people instantly notice PM instruments if they're not well handled in a mix - but the layman has no idea they aren't listening to the most unrealistically long brass notes in a mix, because it's samples from Sony/Teldex/etc.

so the texture and acoustics are basically the most important thing to a layman - over actual realism. Although that might be completely different for actual musicians, who notice the sterility very quickly.
 
I've been developing physically modeled orchestral instruments for the past 4 years on and off. It's actually pretty easy to get ~95% of the way towards realism, but that last 5% becomes unbelievably difficult, and is completely uncharted waters (beyond a certain level of precision, physicists don't fully understand how bowed strings work or what rosin actually does, so good luck!). Samples are always at 100% realism if you use them properly, so it's tough to compete with them.

Despite the praise that companies like SM get for playability and getting "very close", I rarely hear their products in final mixes. The instruments have to a) sound as good as samples, and b) be as quick/easy to use as sample libraries, or they're just not used.

I don't see how most sample library companies could justify the R&D costs involved in developing physically modeled instruments when they can safely throw money at another recording session and make good, safe money. Having a theoretically perfect physical modeling instrument could also negate the need for sample library after sample library recorded in different halls, configurations, with different articulations, so it's not entirely self-evident to me that it's in any sample library company's best interests to push physical modeling until it's already a clear threat (which might be never).

I think if there's ever a trend back to dry orchestral libraries, the physical modeling instruments will get more popular.
I mean this in a nice way, but I find it funny how people's preconceived notions about PM instruments have been shaped. It's trivial to bundle a properly tweaked IR (or good room simulation) into a plugin to make it sound as wet out of the box as you like.

Same goes for the idea that PM instruments are CPU intensive (I can run 700 bassoons simultaneously in my engine now in real time!), and that they take a lot of programming to get sounding good. These are trivial problems compared to the realism issue.
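To make the "bundle an IR into the plugin" point concrete, here is a minimal sketch of what that amounts to: convolving the dry modeled output with a room impulse response and exposing a wet/dry mix. The decaying tone standing in for the PM output and the noise-burst IR are hypothetical placeholders (a real product would ship a measured hall IR); this is just the convolution idea, not anyone's actual engine.

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 48000

# Stand-in "dry" PM output: a decaying 440 Hz tone (hypothetical placeholder).
t = np.arange(SR) / SR
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

# Stand-in room IR: exponentially decaying noise. A shipped plugin would
# bundle a measured (or carefully tweaked) hall IR here instead.
rng = np.random.default_rng(0)
ir = rng.standard_normal(SR // 2) * np.exp(-6 * np.arange(SR // 2) / SR)
ir /= np.sqrt(np.sum(ir ** 2))        # normalize IR energy

# "Wet out of the box": convolve the dry model output with the room IR.
wet = fftconvolve(dry, ir)

# A wet/dry mix knob, as the plugin would expose it.
mix = 0.35
out = (1 - mix) * np.pad(dry, (0, len(wet) - len(dry))) + mix * wet
```

The point is that the room sound is a cheap post-processing stage bolted onto the model's output, independent of the hard realism problem upstream.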
 
I mean this in a nice way, but I find it funny how people's preconceived notions about PM instruments have been shaped. It's trivial to bundle a properly tweaked IR (or good room simulation) into a plugin to make it sound as wet out of the box as you like.

But this actually reinforces my preconceived notion - in order to use a physically modeled virtual instrument, IRs are required to make it wet. :)
 
But this actually reinforces my preconceived notion - in order to use a physically modeled virtual instrument, IRs are required to make it wet. :)
But you, the end-user, wouldn't have to worry about it out-of-the-box in an ideal world!

Plenty of dryly recorded sample libraries do this already. OT's solo instruments are dryly recorded, but wet out-of-the-box, to give one example.
 
Different perspectives, I agree with both...

Samples are plagued with issues when stringing samples together or looping them. e.g. switch articulations and suddenly the tone is oddly different... or "this specific note needs a bit more bite than my sustain, but not as much as my marcato" On the other hand, from demos I've heard from samplemodeling and audiomodeling, this is exactly where they excel.

But it also makes sense to say that physical modeling is only 95% of the way there when trying to model all the details that a recording can trivially capture. And we're so used to having those details from samples that we take for granted how hard it might be to model all of that.
 
You're only referring to the tone right? Not performance (like legato etc).
Tone is of course realistic when it's recorded live. Unless it's processed strangely.
In terms of performance there is of course a long way to go... some things will probably never sound realistic with samples.
Yeah, timbre, sorry if I didn't make that clear. What I mean is that a sustain patch is infinitely more usable in production than an extremely agile PM instrument that's 95% of the way there but still has that synthetic quality that won't go away, because it sticks out. As a PM developer, that's really difficult to compete with.

You can compose around a sample library's limitations, but you can't really do that with a PM instrument at the moment (at least not without huge amounts of effort). I suppose this is because the main issue with sample libraries is the transitions between notes, which are easier to mask or handle carefully, but with PM instruments it's the actual sound itself that tends to be the sore point. That's not to say that PMs do transitions perfectly either, which only adds to the problems.

I say this as an advocate of physical modeling, obviously, there's just still a long way to go.

But it also makes sense to say that physical modeling is only 95% of the way there when trying to model all the details that a recording can trivially capture. And we're so used to having those details from samples that we take for granted how hard it might be to model all of that.
Especially when you can't tell what exactly it is that's missing from the sound. Maybe you can get a general "feel" for what's missing, but trying to define it in precise terms that can be simulated is nearly impossible.

You hear people say things like "You can really hear the rosin on a real violin", "You can feel the air moving in the room with this", or the dreaded "The players are really putting their hearts into this". What does any of this actually mean, precisely?
 
Hi,

Thank You for the interesting feedback, and discussion on this topic.

Maybe physical modeling tech needs some type of technological breakthrough that will match the sonic character and timbre accuracy that sampled instruments offer these days.

Remember, before GigaStudio was around, we didn't have disk-streaming tech to allow us to use large, memory-consuming instruments. Streaming from disk was a key technology that removed all the limitations we had prior to its emergence.

If a similar breakthrough technology is developed that solves the sonic/timbre realism part of the PM challenge for emulating orch. instruments, things will change dramatically in this market.

I'm optimistic that sooner or later this will happen, and then... PM might even surpass sampling as a means of delivering virtual orchestral instruments that are super realistic in terms of both timbre and sound, with performances that also sound more natural and realistic compared to sampling.

That is why I feel it might be wise for some of the leading sample library developers not to ignore physical modeling tech. I'm also guessing that there must be a lot of research going on in the field of PM at various universities that might produce a technology that will help move PM of orchestral instruments to the next level.

As far as sampling is concerned, I wonder how much more it can be improved from where we are today?

Cheers,
Muziksculp
 
Especially when you can't tell what exactly it is that's missing from the sound. Maybe you can get a general "feel" for what's missing, but trying to define it in precise terms that can be simulated is nearly impossible.
I'll throw something out there: it's a lot of the crap real players hate in our sound and practice long and hard to minimize. Inconsistent attacks, timbral fluctuations, miserably bad intonation, etc. Analyze some of the things that make a lousy player lousy - it's easier to see what they are when they're painfully obvious. Once you've got that, emulate them as randomized events within a certain range. Then narrow that range to levels resembling less lousy players. That might go pretty far.
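The idea above - emulate a lousy player's flaws as randomized events, then narrow the ranges for better players - could be sketched like this. The note format, the `skill` parameter, and all the numeric ranges are hypothetical illustrations, not measurements of real players.

```python
import random

def humanize(notes, skill=0.8, rng=None):
    """Add 'lousy player' imperfections, narrowed by a skill factor in [0, 1].

    notes: list of dicts with 'pitch' (MIDI, fractional allowed),
    'time' (seconds), 'velocity' (0-1). Hypothetical event format;
    the ranges below are illustrative guesses.
    """
    rng = rng or random.Random(0)
    spread = 1.0 - skill                  # lousier player -> wider ranges
    out = []
    for n in notes:
        out.append({
            # inconsistent attacks: up to +/-30 ms for the worst player
            "time": n["time"] + rng.uniform(-0.03, 0.03) * spread,
            # bad intonation: up to +/-25 cents (0.25 semitone)
            "pitch": n["pitch"] + rng.uniform(-0.25, 0.25) * spread,
            # timbral/dynamic fluctuation, folded into velocity
            "velocity": min(1.0, max(0.0,
                n["velocity"] + rng.uniform(-0.1, 0.1) * spread)),
        })
    return out
```

At `skill=1.0` the spread collapses to zero and the performance is left untouched; dialing it down widens every flaw at once, which matches the "narrow the range" framing in the post.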
 
I love physical modeling, but I don't think there will be a hard push for PM orchestral instruments anytime soon. I'm working on something using PM now, and there is very little info available out there on it. It seems like you have to do your own R&D. Couple that with the fact that people want orchestral instruments to sound exactly like the real thing. It's just asking for a headache. Other types of music aren't so particular when it comes to authentic sound, so I think more experimental stuff will come first and more realistic instruments afterwards. I hope I'm wrong, but from a developer's point of view it seems like too much work.

Which people?

Be it sampled or physically modeled, who's listening that closely other than the musicians themselves?

When I heard the Sample Modeling demos I was so blown away. I immediately knew that the only reason these instruments sounded fake in certain places was because the keyboardist didn't know how to play a saxophone or a violin. If I play a Sample Modeling saxophone (I'm a saxophonist), you'll think you're listening to Bird or Coltrane... or close enough. I've seen the complaints about the SWAM violin's timbre (buzzsaw), but I heard the examples with the Stradivarius IRs and it sounded pretty darn convincing... I'm not a violinist, but still. If it's played masterfully, timbre will be less and less of an issue.

And the SWAM Orchestra example that user @rohandelivera did was absolutely stunning. So expressive and humanlike. As the technology continues to improve I don't see how PM will not be the new standard.
 
Especially when you can't tell what exactly it is that's missing from the sound. Maybe you can get a general "feel" for what's missing, but trying to define it in precise terms that can be simulated is nearly impossible.

You hear people say things like "You can really hear the rosin on a real violin", "You can feel the air moving in the room with this", or the dreaded "The players are really putting their hearts into this". What does any of this actually mean, precisely?

Lol, I know that's annoying. You need musicians with really good ears who are also familiar with how sound works. Professors at audio production schools are good candidates for this. They're 100% of the time musicians themselves who know how to translate sound into audio terminology. I know that because I'm one of them.
 
I immediately knew that the only reason these instruments sounded fake in certain places was because the keyboardist didn't know how to play a saxophone or a violin. If I play a Sample Modeling saxophone (I'm a saxophonist), you'll think you're listening to Bird or Coltrane... or close enough. I've seen the complaints about the SWAM violin's timbre (buzzsaw), but I heard the examples with the Stradivarius IRs and it sounded pretty darn convincing... I'm not a violinist, but still. If it's played masterfully, timbre will be less and less of an issue.
I'm sorry, but speaking as someone who has spent >$1000 on SM instruments over the past 10 years: they're not comparable to the real thing. It's not the lack of a player, it's not the lack of room sound, it's the instrument. They've had 10 years to get better.

Time and time again, I've bought those instruments thinking exactly the same as you do, and time and time again, I try to use them in production and end up just swapping the instrument with a sample library.

Lol, I know that's annoying. You need musicians with really good ears who are also familiar with how sound works. Professors at audio production schools are good candidates for this. They're 100% of the time musicians themselves who know how to translate sound into audio terminology. I know that because I'm one of them.
Talking with musicians helps (mainly on the player simulation side of things), but there are so many areas where musicians just don't know what's going on either. You get to a point where you just hear criticisms like "it sounds too much like a sine wave, it's too pure", which is correct, but says nothing about what the solution(s) to that might be.

No violinist I've met has told me something like "Despite the fact that physicists claim this doesn't happen, if you simulate the transmission of waves between the string and the bow, it will produce a shimmering artefact in higher frequencies which for some unknown reason makes the sound about 10% less synthy, and also naturally produces some of the bow noise without the need to fudge it by generating white noise, but not ALL of the bow noise". It's things like that. It's throwing $#!+ at the wall to see what sticks, and then scraping 95% of it off again when you realize most of it was a red herring.
 
I think "95% there" is rather generous, but it's worth a try!

A modeled violin needs more than overlaid noise: a real string is a mess of random inharmonic overtones and distortions, especially in the first 100 or so milliseconds of a newly bowed note.
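A toy additive sketch of that idea: slightly stretched (inharmonic) partials with per-partial jitter, plus a bright noise burst that dies away over roughly the first 100 ms. This is illustrative synthesis, not a physical model, and every coefficient (inharmonicity `B`, decay times, rolloff) is an assumed placeholder.

```python
import numpy as np

SR = 48000

def bowed_attack(f0=440.0, dur=0.5, sr=SR, seed=0):
    """Toy additive sketch of a bowed-note onset: inharmonic partials
    plus attack noise decaying over ~100 ms. Illustrative only."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * sr)) / sr
    sig = np.zeros_like(t)
    B = 2e-4                                   # small inharmonicity coefficient (assumed)
    for k in range(1, 13):
        fk = k * f0 * np.sqrt(1 + B * k * k)   # stretched partial frequency
        jitter = rng.uniform(-0.003, 0.003) * fk  # random per-partial detune
        amp = 1.0 / k                          # crude 1/k spectral rolloff
        sig += amp * np.sin(2 * np.pi * (fk + jitter) * t)
    # attack/bow noise: decays with a ~30 ms time constant, so it is
    # essentially gone after the first ~100 ms of the note
    noise = rng.standard_normal(len(t)) * np.exp(-t / 0.03)
    return sig / 12 + 0.2 * noise
```

Even this crude version hints at why the onset is the hard part: the "mess" lives in the noise and the per-partial deviations, and none of those numbers are known from first principles.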

AM's violin is more convincing than their lower instruments, especially in a mix, perhaps because the real violin is more acoustically "perfect", with a clean and quick response.
 