Resynthesis synthesizers

I'm demoing Halion 7 and the Spectral OSC is a delight.
I've only thrown two short gong samples at it and have got lost in using it.
I've taken to it in a way that I never did with Falcon.
Although I should give version 3 a go before selling it.

Sale ends in a few days and Absolute 6 seems to be on the cards.
Very wise: don't sell Falcon yet!

I find Halion great and easy for the most basic stuff, but I still haven't learned how to put everything together in the way that I can in Falcon. I just need more time, though. It sounds so good.
 
I have both, thanks to Simon. Despite having purchased HALion a year prior, I prefer working with Falcon, but they're both brilliant, so I sold Omni. No regrets.
 
You can only use one sample per preset. If you wanted to do that sort of thing, then something like Novum would be an option, although it is more of a granular synth.

If a synth has a formant control, then you can reduce the amount of chipmunking by keytracking it inversely.
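Roughly, the idea is that resampling a note n semitones away from the sample's root shifts the formants by the same n semitones, so the formant control needs to move by -n, scaled by however much keytracking you dial in. A tiny sketch in plain Python, with made-up names and assuming the formant control is calibrated in semitones (so not any particular synth's API):

```python
# Minimal sketch of inverse formant keytracking, not any particular synth's API.
# Assumes the sample's root key is `root_note` and the synth's formant control
# is calibrated in semitones.

def formant_offset(played_note: int, root_note: int = 60, keytrack: float = 1.0) -> float:
    """Formant shift (in semitones) that counteracts resampling 'chipmunking'.

    Playing `played_note - root_note` semitones away from the root shifts the
    formants by that same interval, so applying the negative of it to the
    formant control pulls them back. keytrack < 1.0 compensates only partially,
    which can sound more natural than full correction.
    """
    return -keytrack * (played_note - root_note)

# Example: an octave above the root with full compensation
print(formant_offset(72, root_note=60))  # -12.0 semitones on the formant dial
```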
Dawesome's upcoming Myth synth (set to be released during NAMM in January, iirc) is apparently supposed to do sample resynthesis without chipmunking etc., using a mix of physical modelling and other forms of synthesis. I don't know if it will accept multisamples, though... of course a workaround would be to use one plugin instance per sample, but that could end up using too much CPU (as well as being tedious to set up for MIDI input).
 
Maybe @c0nsilience or @Peter V can lift the veil a little bit? I think @DANIELE and @lychee should join Dawesome’s Discord server(s) either way, because their synths are the avant-garde and likely candidates to evolve more and more into the resynthesis realm.

From the initial post in the Dawesome thread:

“Some of the features:
* Velvet FM - a new form of FM synthesis. It's very simple: no algorithms, no operators. Just a few dials that allow you to tweak the sound. It can do classical sterile, glassy FM timbres, but also soft, organic, analog-ish ones

* you can import samples. These are "re-synthesised" to something like a "liquid wavetable". It's not faithful playback like in a sampler; instead it tries to re-model the sample with a synth engine. Afterwards you have all these funny dials to change the sound

* under the hood there are innovative spectral, physical modelling, wavetable, FM and analog-modelling algorithms ... but it's not an either/or, it's more like all this is brewed into one cohesive magic potion”
 
All this seems promising; we'll have to see it in practice.
But I would like developers like Dawesome, Sonic Charge and others to create these tools with all types of users in mind, not just sound designers.

I think synthesis can also help recreate the world of acoustics and revolutionize our instrument libraries.
My dream would be a new type of sampler where we could take snippets of sounds and transform them by morphing in order to recreate the original instrument.

Tomofon and Synplant are leading the way; I'm waiting to see what Kult will be capable of.
 
I think that all of the Dawesome plugins are very user friendly and accessible to all users; but they certainly don't aim to do the thing that you describe in your second paragraph. I imagine that a lot of AI would be required to do what you are looking for there. But I get the impression that a huge amount of work and experimentation is required to build a modelled acoustic instrument from samples. I'm not sure how much scope there is for the machine learning systems that we have now to help with that. Well, 'I'm not sure' is an understatement - I have no idea!

Kult is nothing like Tomofon (wavetables from samples, capturing the quality of the original) or Synplant (re-synthesis with controllable - but somewhat blindly controllable - modifications). It is a synth with interesting waveforms, great modulation options, and set up so that it is incredibly easy to arrive at pleasing sounds whilst also allowing really good sound designers to introduce all sorts of nuances. I don't know if that is what you are looking for; but I find it a lot of fun to both play presets and create sounds really quickly.

I've been playing about with wavetables in Falcon lately, using Vital to help create the wavetable. Some acoustic sounds really transfer very well with a bit of work to increase the dynamic range. And you can add scripted legato and some options to simulate velocity layers somewhat. I've actually found the results in some cases to be more pleasing and more acoustic sounding than Tomofon. Actually, Falcon is pretty good for a mixture of physical modelling, samples, additive and wavetables. Though I've only barely scratched the surface myself. It doesn't do resynthesis, though; and it does require quite a bit of work.
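To give a rough idea of the "simulate velocity layers" part: I'm basically crossfading between a darker and a brighter wavetable layer based on velocity. Here's the gist in plain Python, with hypothetical layer names - nothing to do with Falcon's actual scripting API:

```python
import math

# Rough sketch of faking velocity layers with two wavetable layers
# (hypothetical names, not Falcon's scripting API): velocity sets an
# equal-power crossfade between a darker and a brighter layer.

def velocity_crossfade(velocity: int) -> dict:
    """Return per-layer gains (0..1) for a MIDI velocity (0..127)."""
    x = max(0, min(127, velocity)) / 127.0          # normalise velocity to 0..1
    return {
        "soft_layer": math.cos(x * math.pi / 2),    # fades out as you hit harder
        "bright_layer": math.sin(x * math.pi / 2),  # fades in as you hit harder
    }

# Example: a fairly hard note still keeps a little of the soft layer
print(velocity_crossfade(96))  # ~{'soft_layer': 0.37, 'bright_layer': 0.93}
```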
 
Firstly, Happy New Year to you, with lots of love, good health and success.

I don’t think that will really happen in the way you’re hoping it will.
I hope you are wrong and that the innovation created by Synplant will lead developers to use the captured sounds to reconstruct the original instrument in a more expressive way than the conventional sample.

This was the case with Wivi, unfortunately abandoned: a series of instruments based on resynthesis, very expressive, and with nothing to be ashamed of next to the competition despite the age of the software.
Unfortunately it is not an instrument-creation tool; only Tomofon comes close to what I want, but it is not complex enough (layer management, keyswitches...).

I could try with advanced tools like Phase Plant, Vital, MSoundFactory, Falcon...
But here it's the opposite: these are complex tools that are not dedicated to the task I would like to assign to them; everything has to be built, and I don't think I have the skills for that.

If any developers come across this, think about it: synthesis can also be useful for the creation of acoustic instruments. As proof, I offer the experiments of Joel Blanco Berg:

 
My point is, we’ve had seriously funded academic institutions such as IRCAM come up with Modalys. We’ve had Yamaha VL1 in the 1990s. And to my ears physically modeled instruments today still do not outshine those -by now- decades old models. They sound decent if emulating acoustic instruments is your goal, and great if you just like the synthesis part. But never do they sound like a paradigm shift is near the horizon.

If you want playable instruments, somewhat sounding like acoustic instruments, that’s great. If you want to entirely replace samples that’s not so great, because at the end of the day the music you’ll end up with sounds like Frank Zappa’s Synclavier-era output at best. Again, I happen to like that music in all of its artificial mechanical-sounding glory. But for mockups, I can’t say I share your hopes.

The Acousticsamples attempts are good, and I love their instruments, but most of my sampled woodwind ensembles sound a hell of a lot more convincing, as if we're hearing an actual ensemble playing on a stage. And those (VWinds) are instruments that have been carefully tweaked and modeled to sound like woodwinds. They do only to a certain extent, despite even being sample-based.

So, imagine an instrument that allows its users to input a multi-sample, analyze it on the fly, resynthesize it and then come up with a playable “real sounding” synthesized acoustic instrument on the fly, like Synplant. That means Acousticsamples can immediately stop their careful tweaking and most sample developers can pack up their shop too. At least, that’s my interpretation of what you hope will happen. It may. But I guess I am doubtful whether we’ll see a Tomofon 2 soon that can do this.

Here’s me abusing some physically modeled woodwinds, just for a fun illustration. Nice sounds, also the current state of the art I guess, i.e. cool but flawed.

 
Tomofon is close to that; it would only have needed its engine to manage layers and its resynthesis to be cleaner (it has difficulty synthesizing anything that is not tonal, like noise) for us to have a next-gen sampler that might have changed the game.

Acousticsamples would not have had to fear this change, as they take a hybrid approach which also tries to revolutionize sampling.

I am far from wanting to ruin companies, but I find that sampling deserves a small revolution, and the big developers are not the ones going in this direction, even though they are the ones who would have the means.
 
I am far from wanting to ruin companies
Oh, I wasn't implying that at all. The argument I tried to make was: seeing how it takes a professional company lots of work to arrive at new levels of “realism” (VWinds) with careful programming and scripting, I can't imagine some Tomofon 2 sample-import-and-resynthesis algorithm taking over the process “automagically” any time soon with similar results.

I also get your reasoning and to a degree I even agree. I was merely stating that I do not share your optimism. ;)
 
Here’s me abusing some physically modeled woodwinds, just for a fun illustration. Nice sounds, also the current state of the art I guess, i.e. cool but flawed.

I hope you don't mind that I felt I had to play along with your demo using a rather messed up, metallic patch in Softube Modular. It just felt right.
 
the experiments of Joel Blanco Berg:
Thanks for posting this; I wasn't aware of it before. It inspired me to have a poke around in HISE's wavetable synth. Here's a quick little demo. Obviously it's nothing compared to a real instrument or Joel's VIs, but considering it's maybe the second time I've ever touched a wavetable synth and I only spent 10 minutes with it, I think the technique shows promise.

 
But never do they sound like a paradigm shift is near the horizon.


I quite like how your technological pessimism about the limits of realism, which I mostly share, leads to real optimism about the future of music making with these instruments.
 
It inspired me to have a poke around in HISE's wavetable synth. Here's a quick little demo.


That's not fair! That's already brilliant! I'm sure that the process of tweaking to get things as good as they could be could take nigh on forever, but you've already got such a good result. Except for the chords: it weirdly turned into a much more synthetic sound with chords.

I have some woodwind-esque sounds in a patch sampled from my Taiga synth. They aren't as realistic, but I like the way it sounds like different instruments as you move through the note range.


[Attached audio: Demo of Taiga Woodwinds.mp3]
 