# Accentize Chameleon - AI Matching Reverb



## derschoenekarsten (Aug 19, 2021)

https://www.accentize.com/chameleon/

Just found this plug-in by pure chance. It's incredible. If you're having trouble matching very dry with very wet libs, this is a godsend.
I'm in no way affiliated with the developer, just thought people around here might be interested.

In case anybody cares, here's some marketing blurb from the website:

_Chameleon is an intelligent audio plugin which uses artificial neural networks to estimate and model the exact reverb content of any source recording. You can build a reverb profile in seconds and easily apply it to dry studio recordings._

_create unlimited different unique reverbs with a single click_
_automatic parameterisation of dry/wet-mixing, stereo-width and pre-delay_
_the ideal tool for realistic ADR and foley matching_
_useful for creative sound-design or music-production_
_extract the natural room-impulse-response of any recording and export as a wav-file_


----------



## Soundbed (Aug 19, 2021)

derschoenekarsten said:


> https://www.accentize.com/chameleon/
> 
> Just found this plug-in by pure chance. It's incredible. If you're having trouble matching very dry with very wet libs, this is a godsend.
> I'm in no way affiliated with the developer, just thought people around here might be interested.
> ...


Wow this looks cool! Going to do some more research. Would love to hear anyone’s opinions on this (positive or critical).

Also looks like there is a demo version.


----------



## Trash Panda (Aug 19, 2021)

Initial results seem pretty promising for blending dry libraries into wet ones.

Here's a totally unscientific test of wet snares (BBCSO, AR1, Majestica) and a dry snare (Jaeger).


----------



## Soundbed (Aug 19, 2021)

Trash Panda said:


> Initial results seem pretty promising for blending dry libraries into wet ones.
> 
> Here's a totally unscientific test of wet snares (BBCSO, AR1, Majestica) and a dry snare (Jaeger).



Wow, awesome! This might be an interesting way to match with AROOF too(?).


----------



## dzilizzi (Aug 19, 2021)

Trash Panda said:


> Initial results seem pretty promising for blending dry libraries into wet ones.
> 
> Here's a totally unscientific test of wet snares (BBCSO, AR1, Majestica) and a dry snare (Jaeger).



I thought it did a perfect match with Majestica, really good with BBCSO, and not quite as good with AR1.


----------



## Soundbed (Aug 19, 2021)

dzilizzi said:


> I thought it did a perfect match with Majestica, really good with BBCSO, and not quite good with AR1.


Tough to tell for me on these AirPods. Will need to listen when I get back to the studio. Seems okay at least for AR1/AROOF.


----------



## Trash Panda (Aug 19, 2021)

Here is a more practical test using a low-effort old mock-up I did of Attack Team from Final Fantasy Tactics, using the performance patch from Nucleus on the Classic Mix, which is fairly dry. It won't impress anyone from a performance/mock-up perspective, but it should help provide a more contextual idea of what this tool is capable of.

All IRs were "captured" using the shortest articulations I could find for each instrument group across the whole range where applicable. Interestingly, the BBCSO IRs came out with a crazy high mix compared to AROOF (the French Horn IR was 54% wet!). I turned those down to mid 20% range for the second playthrough with the BBCSO IRs in the video, so don't let the first round make up your mind right away for better/worse.

I'm sure better results could be achieved with more experienced IR capture, a deeper tweaking of the provided parameters and some clever use of Panagement/Precedence and EQ on top of the reverb. This is just a quick test using what was provided by the plugin.

Other potential applications could be grabbing IRs of individual microphones on a library/instrument group, exporting them to WAV files and balancing them in a multi-IR loader, such as Libra or Reverberate 3 to simulate microphone levels against a mixed mic or close mic of a dry library.

Links are included in the YouTube description if you want to jump around to the different takes.


----------



## Trash Panda (Aug 19, 2021)

dzilizzi said:


> I thought it did a perfect match with Majestica, really good with BBCSO, and not quite good with AR1.


Well to be fair, the AR1 snares have way more low end. I was able to get it to a pretty close sound with a Match EQ, but wanted the focus to be on the IR results.


----------



## dzilizzi (Aug 19, 2021)

Trash Panda said:


> Well to be fair, the AR1 snares have way more low end. I was able to get it to a pretty close sound with a Match EQ, but wanted the focus to be on the IR results.


Not really complaining. Just what I heard. 

Thanks for doing it.


----------



## Saxer (Aug 19, 2021)

I'm afraid of the day when my reverbs are smarter than me.


----------



## Dr.Quest (Aug 19, 2021)

Looks quite interesting. Trying the demo now.


----------



## gnapier (Aug 20, 2021)

I don’t have this plug in, but I do have a lot of their other stuff. I’ve been really impressed and pleased with them FWIW.


----------



## Soundbed (Aug 20, 2021)

gnapier said:


> I don’t have this plug in, but I do have a lot of their other stuff. I’ve been really impressed and pleased with them FWIW.


There’s a demo. 🙂




Trash Panda said:


> Here is a more practical test using a low effort old mock up I did of Attack Team from Final Fantasy Tactics using the performance patch from Nucleus on the Classic Mix, which is fairly dry. It won't impress anyone from a performance/mock up perspective, but it should help provide a more contextual idea of what this tool is capable of.
> 
> All IRs were "captured" using the shortest articulations I could find for each instrument group across the whole range where applicable. Interestingly, the BBCSO IRs came out with a crazy high mix compared to AROOF (the French Horn IR was 54% wet!). I turned those down to mid 20% range for the second playthrough with the BBCSO IRs in the video, so don't let the first round make up your mind right away for better/worse.
> 
> ...



Thanks for doing that! They sound really similar on these AirPods.


----------



## jcrosby (Aug 20, 2021)

I'm demoing this now and I have to say this is pretty spectacular. So far the profiles it's created for Metropolis Ark are almost a dead-ringer.... I.e. if I make a tree profile, then turn on only the close mics and load the tree profile the tone of the reverb is virtually perfect.

No more having to make DIY IRs that sound ok, but not nearly as good as these do...

Amazing find!!


----------



## Soundbed (Aug 20, 2021)

jcrosby said:


> I'm demoing this now and I have to say this is pretty spectacular. So far the profiles it's created for Metropolis Ark are almost a dead-ringer.... I.e. if I make a tree profile, then turn on only the close mics and load the tree profile the tone of the reverb is virtually perfect.
> 
> No more having to make DIY IRs that sound ok, but not nearly as good as these do...
> 
> Amazing find!!


Ooh, I’m getting excited. Looking forward to getting into the studio.

*which source sound(s) did you use in Ark?

*Did you use the “built in” reverb or export an IR to use for (external) convolution?


----------



## jcrosby (Aug 20, 2021)

Soundbed said:


> Ooh, I’m getting excited. Looking forward to getting into the studio.
> 
> *which source sound(s) did you use in Ark?
> 
> *Did you use the “built in” reverb or export an IR to use for (external) convolution?


So far I've tried loading both string patches into one instance and then having it analyze that as a general string reverb. The result is a bit duller; with just a dB or two of treble boost it sounds pretty darn close.

I did the same for the brass... Now I'm going to try each brass instrument and see how well it does placing each of Jaeger's dry brass instrument mics at similar depths. Knock on wood, this will create a real-enough-sounding version of each instrument's depth. If I get good results I'll do a screen recording and post the video here...

Also, I've been using short patches for everything. They seem like the best candidates for analyzing the decay time correctly... I've also been trying to feed it the full range of an instrument by playing short notes in octaves on every quarter, ascending two to four semitones (if that makes sense...)

I took a similar approach with the full brass by loading each instrument into its own Kontakt instance and creating a full-range clip for each channel, then putting Chameleon on the group and having it analyze the summed mix...


----------



## Soundbed (Aug 20, 2021)

jcrosby said:


> So far I've tried loading both string patches into one instance and then having it analyze that as a general string reverb. It's a bit duller, just a dB or two on the treble boost and it sounds pretty darn close.
> 
> I did the same for the brass... Now I'm going to try each brass instrument, and see how well it does placing each of Jaeger's dry brass instrument mics at similar depths. Knock on wood this will create a real enough sounding version of each instruments' depth. If I get good results I'll do a screen recording and post the video here...
> 
> ...


Awesome!! It occurs to me to wonder … does the demo let someone export the IR? If so then people might never buy it. (I wouldn’t do that nor do I condone it.)


----------



## Trash Panda (Aug 20, 2021)

Soundbed said:


> Awesome!! It occurs to me to wonder … does the demo let someone export the IR? If so then people might never buy it. (I wouldn’t do that nor do I condone it.)


Nope. You can make as many IRs as you want, but cannot export them unless you purchase.


----------



## derschoenekarsten (Aug 20, 2021)

Soundbed said:


> Awesome!! It occurs to me to wonder … does the demo let someone export the IR? If so then people might never buy it. (I wouldn’t do that nor do I condone it.)





Trash Panda said:


> Nope. You can make as many IRs as you want, but cannot export them unless you purchase.


I just exported an IR from the demo and tested it in Reverberate (OSX, Logic). Pretty sure that won't keep me from buying though.


----------



## Trash Panda (Aug 20, 2021)

derschoenekarsten said:


> I just exported an IR from the demo and tested in Reverberate (OSX, Logic). Pretty sure that won't keep me from buying though


*!*

I swear, it wasn't allowing this yesterday, but now it's working. Will also definitely be purchasing, but I have a lot of IR capturing to do for the next few days.


----------



## Trash Panda (Aug 20, 2021)

Bizarro. On my laptop, I cannot export the IRs in trial mode. On my desktop I can.


----------



## givemenoughrope (Aug 20, 2021)

jcrosby said:


> I'm demoing this now and I have to say this is pretty spectacular. So far the profiles it's created for Metropolis Ark are almost a dead-ringer.... I.e. if I make a tree profile, then turn on only the close mics and load the tree profile the tone of the reverb is virtually perfect.
> 
> No more having to make DIY IRs that sound ok, but not nearly as good as these do...
> 
> Amazing find!!


I wonder how well it could make drier libraries (8dio strings) sit with the wetter stuff like SCS.


----------



## Zedcars (Aug 20, 2021)

Can we finally get a/some BBCSO IR(s) with this?!


----------



## derschoenekarsten (Aug 21, 2021)

After two days of demoing I'm even more impressed than I initially was. Shoutouts to @jcrosby for the idea with the single mics/multi IR loader. 

I also tried capturing IRs from songs where I like the reverb in particular. Results are not as close as with single tracks, but the IRs do a good job of capturing the overall vibe. Currently I have a choir singing in an approximation of the staircase from "When the Levee Breaks".

Does anybody have an idea regarding the legal ramifications of sharing captured IRs? Building a collection seems like a great community project, but I'm somewhat hesitant.


----------



## ToadsworthLP (Aug 21, 2021)

derschoenekarsten said:


> After two days of demoing I'm even more impressed than I initially was. Shoutouts to @jcrosby for the idea with the single mics/multi IR loader.
> 
> I also tried capturing IRs from song where I like the reverb in particular. Results are not as close as with single tracks, but the IRs do a good job of capturing the overall vibe. Currently I have a choir singing in an approximation of the staircase from "When the Levee Breaks"
> 
> Does anybody have an idea regarding the legal ramifications of sharing captured IRs? Building a collection seems like a great community project, but I'm somewhat hesitant.


A community collection of IRs sounds like a great idea! Also, I'm not a legal expert, but I don't think there'd be much of a difference from sharing settings of an algorithmic reverb, or an IR recorded from a hardware reverb unit, which should theoretically be fine. So unless the plugin itself has a clause prohibiting the distribution of IRs made using it, it should be OK? Personally, I don't think they'd send the police after anyone if the IR files aren't sold for profit, but again, I'm not a legal expert.


----------



## muk (Aug 21, 2021)

Trash Panda said:


> Bizarro. On my laptop, I cannot export the IRs in trial mode. On my desktop I can.


Odd. For me export doesn't work. A specific message pops up saying that IR export is not supported in trial mode.


----------



## ToadsworthLP (Aug 21, 2021)

Just out of curiosity, could someone try putting CSS into Air Studios with a SSS IR? I really don't like the out-of-the-box Trackdown sound and I'd love to know what this plugin is really capable of.


----------



## jcrosby (Aug 21, 2021)

Did some screen recording showing the results I got yesterday....

All of the reverbs I enable/disable were extracted from Metropolis Ark 1, using staccatos for the brass & choir, Bartóks and staccato for the strings, and a stack of a bunch of percussion for the percussion... All the reverb you hear in here is Chameleon, nothing else.

You'll notice most of the sources are dry. Some are basically dead-dry... A few of the AI brass patches use a very small amount of the Decca, as the dry low brass and 12 horns sound too wimpy without it. But you'll notice they still sound pretty dead-dry, and the AI brass trees still sound quite close.

One other thing to keep in mind is that I used Precedence to stage things, so even the dry sounds have a sense of depth... I also used my 2nd new favorite toy, Expanse 3D, to help push some things further back. (E3D doesn't do this with reverb; again, all of the actual tails and heft of the reverb come from what Chameleon learned from what I fed it...)

I also played around with the stretch feature. You can make some really huge hit and boom reverbs by stretching any of the reverbs to 100%.

Keep in mind this isn't a finished piece of music by any standard... My CCs and velocities are pretty lazy in this, and the programming and harmony are wonky... This was just to test how Chameleon does at extracting the reverb from Metropolis Ark... (Air Studios, you're next!!)


FYI I bought Chameleon. IMO this is an absolute no brainer. Nothing else does what this does so far (at least that I'm aware of)! Hope this helps for anyone thinking about it...


*UGHH. The Video won't attach. Dropbox Link Below....*

https://www.dropbox.com/s/oxeququbvs8amcw/Chameleon%20%27Learned%27%20Metropolis%20Ark%20Reverbs%20HB.mp4?dl=0


----------



## jcrosby (Aug 21, 2021)

ToadsworthLP said:


> Just out of curiosity, could someone try putting CSS into Air Studios with a SSS IR? I really don't like the out-of-the-box Trackdown sound and I'd love to know what this plugin is really capable of.


See the video above. I only used the close mics in Areia, which are pretty much dead-dry. I _put_ them in Teldex, and the before/after when all strings are playing with Chameleon on is anything but subtle. I think it should give you a clear impression that you can totally achieve what you're after. 

(Download the demo and test it yourself though. The compressed audio in this video doesn't do justice to the difference.)


----------



## Trash Panda (Aug 21, 2021)

Did you use the default mix values or tweak them to taste? Can’t watch the video until I’m home.


----------



## averystemmler (Aug 21, 2021)

I just did some testing, because this seems like a fantastic case for A.I. tools.

So far, it looks to me like it's mainly measuring the length and tonal character of the input over that length, and the profile it creates seems to basically be shaped noise. It doesn't look like it regards the individual reflections, or even the overall pattern of them; it captures the spectral quality of a space over time, but none of the taps, per se.

I fed Chameleon a variety of musical content, and test signals (both impulses and dry musical content) through a few different simple sets of early reflections as well as algorithmic and convolution reverbs with noticeable reflections. The resulting profiles/IRs all instead had a relatively smooth decay at the approximate RT60 of the input, and didn't reproduce the reflections at all (except via the pre-delay parameter, in some cases). The main differences between profiles were in the decays at different frequencies, and it does a pretty good job of interpreting that. As a test, I fed it a hammered harp playing musically through FabFilter's Pro R with a steep dip in the "decay rate EQ" at 1.3k, and the resulting profile in Chameleon represented that band's decay accurately. I think it also captured better when there was some dry signal in there too. Which makes sense, considering its purpose.

I also tried inverting the curve of these reflections (the "reverse" effect) to see if it captured the overall shape, but the result was still a typical, decaying tail. To be fair, it's only designed for natural spaces.

Overall, when fed musical content recorded in real spaces, I'm pretty surprised at how effective the profiles can be in context, despite its complete disregard for reflections. In fact, the lack of individual reflections might also make it much more appropriate for applying overtop of already spatial sources, since there's less risk of comb filtering or other weird "room within a room" effects. I'm looking forward to poking at it some more.
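If the "shaped noise" reading above is right, the core idea is easy to sketch: split white noise into frequency bands and give each band its own exponential decay. This is purely an illustration of that hypothesis; the function name, band edges, and RT60 values are made up, and nothing here reflects Chameleon's actual internals.

```python
import numpy as np

def shaped_noise_ir(rt60_bands, band_edges_hz, sr=48000, length_s=3.0):
    """Synthesize a reverb-tail IR as shaped noise: white noise split
    into frequency bands, each decaying at its own RT60 (-60 dB point)."""
    n = int(sr * length_s)
    t = np.arange(n) / sr
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1 / sr)
    ir = np.zeros(n)
    for (lo, hi), rt60 in zip(band_edges_hz, rt60_bands):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n)  # noise limited to this band
        ir += band * 10 ** (-3.0 * t / rt60)     # -60 dB after rt60 seconds
    return ir / np.max(np.abs(ir))

# Hypothetical hall-ish profile with a faster decay around 1.3 kHz,
# loosely mirroring the Pro-R "decay rate EQ" test described above.
edges = [(20, 600), (600, 2000), (2000, 8000), (8000, 20000)]
ir = shaped_noise_ir([2.4, 0.8, 1.6, 1.0], edges)
```

Loading such an IR into any convolution reverb yields a smooth, reflection-free tail, consistent with the dense decaying wash described in the tests above.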


----------



## jcrosby (Aug 21, 2021)

Trash Panda said:


> Did you use the default mix values or tweak them to taste? Can’t watch the video until I’m home.


I made IRs of every mic. (What can I say, I'm OCD like that.) I made IRs of the default mixes too.

The method I've tried so far was this: I loaded all instruments from each section and played shorts in octaves with a little harmony to make sure I captured the whole range... I spaced out the notes so there were three quarter notes that filled up the whole range and the tail of the last note was captured completely... I captured each instrument's mics for discrete verbs for each brass/strings instrument. I then summed all of the brass/strings to their own groups and made 'buss' verbs for brass and strings.

Choirs I just summed together... At some point I'll probably do F/M separately, but the summed choir mix does the job fine...

Last thing I did was put Chameleon on the mixbus and capture everything playing at once for a 'snapshot' of the entire Teldex room... In case I want to send a little bit of an entire mix into the same space.

Also in the video I'm only using the "buss" reverbs. TL;DR: These sound great as is, I'd imagine splitting each instrument to its own position will sound even better...


----------



## jcrosby (Aug 21, 2021)

averystemmler said:


> I just did some testing, because this seems like a fantastic case for A.I. tools.
> 
> So far, it looks to me like it's mainly measuring the length and tonal character of the input over that length, and the profile it creates seems to basically be shaped noise. It doesn't look like it regards the individual reflections, or even the overall pattern of them; it captures the spectral quality of a space over time, but none of the taps, per se.
> 
> ...



Interesting... Are you saying you're not getting an impression of depth? I've found that when I feed close, basically dead-dry mics into most of the reverbs I've captured and set the mix to 100%, the impression of depth is definitely there....

I agree though, I don't think it's learning and deconvolving the actual reverb, but synthesizing it in some way. In terms of legality at least this would mean that creating a virtual version of AR, Teldex, etc shouldn't be a gray area.


----------



## averystemmler (Aug 21, 2021)

jcrosby said:


> Interesting... Are you saying you're not getting connotation of depth? I've found that when I feed close, basically dead dry mics into most of the reverbs I've captured and set the mix to 100% the impression of depth is definitely there....


I haven't tested it enough yet musically to have an opinion, but I don't hear the "walls" of a real space. I'm not sure that you need that to perceive depth, really, but there's something missing (for better or worse) from the original source.

I had a look in Plugin Doctor too, and the impulse response readout for all Chameleon profiles I made basically look like the tail component of a normal algorithmic reverb - a dense decaying wash without many noticeable reflections or clear (to the eye) patterns. Generally, an algo reverb with an ER component, or an impulse recorded in a real space, will have at least a few clearly visible reflections poking out early on, that eventually meld into the diffuse tail. But of course, visuals don't mean too much in practice. An impulse response of the classic Random Hall looks like gibberish to me, but sounds fantastic.

I did also notice that all the profiles I made roughly followed a "stepped" decay (e.g. "Positive Tap Slope" on page 29 of the Lexicon PCM Native manual: https://lexiconpro.com/en/product_documents/pcm_native_room_manualpdf), which was especially noticeable when I fed it the simpler "ER only" signals. I tried some very low density exponential, bell, and inverse bell patterns from B2, and Chameleon interpreted all of them with the same falling, stepped pattern, just with slightly differing RT60 lengths (it seemed to interpret the bell as having ended when it reached its peak midway). It definitely looks like it's measuring certain parameters and synthesizing, rather than deconvolving.

Interestingly, one of the exponential patterns I tried confused it and created an impulse with some wild back-and-forth spectral sweeps. I'm not sure what to make of that, but it does make me think it's doing more than just figuring out a decay per frequency band. Or maybe it was just a glitch.

I probably should have done this first, but I just looked at the user manual too, and found this:

_"Internally, artificial neural networks are being used to estimate the length and the frequency-depended decay of the reverb tail. In the development process more than 30.000 different reverb conditions have been used to let the algorithm learn how to handle and imitate different recording scenarios."_

I'd love to know more about those conditions they trained it with. It'd be fascinating to hear one that was only trained on scoring stages, another that was trained on cathedrals, another on small rooms, chambers, plates, etc.

Definitely a unique tool, regardless, and I hope the tech keeps evolving.
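For context, the "length and frequency-dependent decay" the manual describes can also be estimated without any neural network, via classic Schroeder backward integration of an impulse response (run it per frequency band to get RT60 per band). A minimal broadband-only sketch; the function name and fit range are my own choices, not anything from Chameleon:

```python
import numpy as np

def rt60_schroeder(ir, sr=48000):
    """Estimate RT60 via Schroeder backward integration: accumulate
    energy from the tail, fit the -5..-35 dB slope of the decay curve,
    and extrapolate to -60 dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]            # Schroeder integral
    edc_db = 10 * np.log10(energy / energy[0] + 1e-12)
    t = np.arange(len(ir)) / sr
    sel = (edc_db <= -5) & (edc_db >= -35)             # linear-fit region
    slope, _ = np.polyfit(t[sel], edc_db[sel], 1)      # dB per second
    return -60.0 / slope

# Sanity check on a synthetic noise tail with a known RT60 of 1.5 s:
sr = 48000
t = np.arange(sr * 3) / sr
ir = np.random.randn(len(t)) * 10 ** (-3.0 * t / 1.5)
print(rt60_schroeder(ir, sr))  # lands very close to 1.5
```

The hard part, and presumably where the neural network earns its keep, is doing this from reverberant music rather than from a clean impulse response.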


----------



## jcrosby (Aug 21, 2021)

averystemmler said:


> I haven't tested it enough yet musically to have an opinion, but I don't hear the "walls" of a real space. I'm not sure that you need that to perceive depth, really, but there's something missing (for better or worse) from the original source.
> 
> I had a look in Plugin Doctor too, and the impulse response readout for all Chameleon profiles I made basically look like the tail component of a normal algorithmic reverb - a dense decaying wash without many noticeable reflections or clear (to the eye) patterns. Generally, an algo reverb with an ER component, or an impulse recorded in a real space, will have at least a few clearly visible reflections poking out early on, that eventually meld into the diffuse tail. But of course, visuals don't mean too much in practice. An impulse response of the classic Random Hall looks like gibberish to me, but sounds fantastic.
> 
> ...


Interesting. That does make sense I suppose, I'd imagine calculating an ER would be much trickier... 

I'm going to message the developer about this. Hopefully, given that this is only v1 they have a longer term vision in mind of eventually being able to mimic the reflection patterns, even if done via shaped noise... (That said I've only read about noise theoretically being used as a reverb tail so I have no idea how feasible it may or may not be).

I also listened to some of the IRs as audio, and it does sound like it could be shaped noise... That said, it really does a smash-up job of imitating the tone and decay of a space. It certainly fools my ear enough to have no regrets about buying it...

Cheers and great sleuthing


----------



## averystemmler (Aug 21, 2021)

jcrosby said:


> Interesting. That does make sense I suppose, I'd imagine calculating an ER would be much trickier...
> 
> I'm going to message the developer about this. Hopefully, given that this is only v1 they have a longer term vision in mind of eventually being able to mimic the reflection patterns, even if done via shaped noise... (That said I've only read about noise theoretically being used as a reverb tail so I have no idea how feasible it may or may not be).
> 
> ...


I'm leaning towards picking it up too! I don't see the lack of clear reflections as a negative in this case, just something to be aware of. I think it's impressive that they're able to pull everything they do out of whatever chaotic source material we throw at it.

As far as "shaped noise" goes, this is basically what I was suggesting might be going on - just programmatically, based on the input:

Create Idealized Impulse Responses for Convolution Reverbs (inSync, www.sweetwater.com)

The result may not have all of the psychoacoustic cues of a real room, but it avoids a lot of the pitfalls too.


----------



## jcrosby (Aug 21, 2021)

averystemmler said:


> I'm leaning towards picking it up too! I don't see the lack of clear reflections as a negative in this case, just something to be aware of. I think it's impressive that they're able to pull everything they do out of whatever chaotic source material we throw at it.
> 
> As far as "shaped noise" goes, this is basically what I was suggesting might be going on - just programmatically, based on the input:
> 
> ...


I don't see the lack of reflections being bad either. If anything it's a good thing, in that it won't actually interfere with any baked-in reflections of a given library... It seems like this is kind of perfect for scoring scenarios where you want instruments to _sound_ like they're in the same space, but don't want to overlay another set of reflections that might mess with some of the acoustic information that winds up giving a library its character....

That also explains why I got the most out of Jaeger's brass by leaving its Decca on... Removing the tree from the mix killed the heft of each instrument, and while Chameleon does overlay a decent amount of character, it still wasn't nearly as rich as the actual samples... (Obviously)... I really like the result of the combination of the two.

Reflections would definitely be cool though... I've messaged them; hopefully they'll confirm that it's on their roadmap... Great plugin overall. As you said, it's an excellent example of where AI can be harnessed to do something really useful that wasn't possible until now...


----------



## XComposer (Aug 22, 2021)

I'm going to buy it, too! Great tool for me! One other small issue, according to a test of mine (an IR recorded in a small chapel): some spaces resonate at specific narrow frequency bands (they have formants, acoustically speaking), and I observed that Chameleon tends to get rid of those narrow resonance peaks and distribute the frequency profile more evenly (which is often good for the sound). From this point of view, it's an interesting idea to use it together with a match EQ (not fully mixed in, just a little) if one really wants to come even closer to the original sound; the built-in EQ is not suited to setting resonance peaks at specific frequencies. In my case, the results obtained this way (Chameleon + match EQ) were almost perfect.


----------



## jcrosby (Aug 22, 2021)

The developer got back to me, and @averystemmler was totally right... Here's what he said, which explains the logic behind not including ERs. (He did say he's open to it though...)

_Regarding the early reflections you are right. Currently, the plugin only focuses on the diffuse reverb tail. Early reflections are on the one hand quite difficult to estimate accurately and on the other hand oftentimes also did more harm than good in our ADR testings (phase issues etc.). However, for your use-case it sounds a bit like a different story. We will look into it and see if we can integrate an early reflections functionality in the future!_

Interestingly, I tried adding some ERs with Pro-R and a few IRs. If the reverb generating the ERs wasn't 100% wet, there were indeed some quite audible, nasty phasing issues...
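For intuition on those phase issues: mixing a signal with a short-delayed copy of itself (which is what happens when an ER generator passes dry signal alongside the library's own baked-in signal) produces comb filtering, with deep notches at regular frequency intervals. A toy demonstration; the 4 ms delay is an arbitrary illustrative value, and the circular delay via np.roll just keeps the FFT math exact:

```python
import numpy as np

sr = 48000
sig = np.random.randn(sr)          # one second of white noise
delay = int(0.004 * sr)            # a 4 ms "early reflection" of the dry signal

# Dry + delayed dry acts as a comb filter: H(f) = 1 + e^(-i*2*pi*f*0.004)
mixed = sig + np.roll(sig, delay)  # circular delay keeps the bins exact

spec = np.abs(np.fft.rfft(mixed))  # 1 Hz bin resolution (n = sr)
# Nulls fall at odd multiples of 1/(2 * 0.004 s) = 125 Hz: 125, 375, 625, ...
print(spec[125] < 1e-6 * spec.max())  # True: deep notch at 125 Hz
```

With real reverbs the delayed copy is filtered and attenuated, so the notches are shallower, but the same mechanism is why doubled-up ERs can sound hollow or phasey.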

He's also happy to hear that the composing crowd has found a unique/somewhat unexpected use for Chameleon... The OP discovered a real gem in this plugin.

_This sounds really cool. Originally Chameleon was designed only for speech and targeted towards ADR and foley production. Awesome to hear that it is also useful for composers._


----------



## XComposer (Aug 22, 2021)

It's very useful for composers. In my case, I receive recordings of my music from various locations (and therefore various spaces, from theaters to dry rooms) – and they can be different movements of the same piece or even different takes of the same movement or passage – and I standardize and even out their quality in my own studio, or re-record some passages here, before I upload them or get them published. 
Or I explore more creative exchanges of reverb features in my electroacoustic music. 
This is a very useful tool for all this!


----------



## odod (Aug 22, 2021)

how about smartreverb from sonible?


----------



## Trash Panda (Aug 22, 2021)

odod said:


> how about smartreverb from sonible?


Not really the same thing at all as this. Smart Reverb tries to create a reverb profile based on incoming signal. This tries to recreate an existing room’s reverb profile using audio.


----------



## Trash Panda (Aug 23, 2021)

So…who wants to trade some Teldex profiles in exchange for AR1?


----------



## szczaw (Aug 23, 2021)

Solo dry EW flutes, with the AR1 IR, and with AR1 strings. Are we in the same space?

View attachment ar1 ew.mp3


----------



## ToadsworthLP (Aug 24, 2021)

Hmm, doesn't seem to run in Studio One 5.3. Any chance of a community-curated collection of IRs so people who don't or can't have it can join the fun too? They don't seem to prohibit sharing generated and exported IR files in their license agreement in the manual (disclaimer: I am not a legal expert).


----------



## dzilizzi (Aug 24, 2021)

ToadsworthLP said:


> Hmm, doesn't seem to run in Studio One 5.3. Any chance of a community-curated collection of IRs so people who don't or can't have it can join the fun too? They don't seem to prohibit sharing generated and exported IR files in their license agreement in the manual (disclaimer: I am not a legal expert).


They will have to be careful with this. Air doesn't allow the sharing of IRs made from libraries recorded at Air. I think it is in the EULA; at least it gets brought up every time someone talks about it. I don't know about Abbey Road. I do think sharing the Maida Vale one would be acceptable, as it is no longer a recording studio from what I understand. The building is still there, but I believe all the equipment has been removed? At least that is what I understood from something said by someone at Spitfire.

So Chameleon isn't working in Studio One? That could be a problem.


----------



## Soundbed (Aug 24, 2021)

jcrosby said:


> Did some screen recording showing the results I got yesterday....
> 
> https://www.dropbox.com/s/oxeququbvs8amcw/Chameleon%20%27Learned%27%20Metropolis%20Ark%20Reverbs%20HB.mp4?dl=0


I think that's really impressive. Thank you for sharing!



averystemmler said:


> I haven't tested it enough yet musically to have an opinion, but I don't hear the "walls" of a real space.


Yeah it might be good (or at least ok) that those ERs aren't in there, because some will almost certainly be in the "target" audio ... unless it's sample modeling and / or was recorded in an anechoic space, right?



averystemmler said:


> It'd be fascinating to hear one that was only trained on scoring stages, another that was trained on cathedrals, another on small rooms, chambers, plates, etc.


I'm imagining it was a lot of rooms in houses or offices, hotels, cars, bathrooms of various sizes, warehouses, alleyways, hallways, elevators.


----------



## ProfoundSilence (Aug 24, 2021)

Haven't played around with it too much but I like it


----------



## ProfoundSilence (Aug 24, 2021)

here's some toying around simulating microphones and using aux sends.


----------



## icecoolpool (Aug 25, 2021)

dzilizzi said:


> They will have to be careful with this. Air doesn't allow the sharing of IRs made from libraries recorded at Air. I think it is in the EULA; at least it gets brought up every time someone talks about it. I don't know about Abbey Road. I do think sharing the Maida Vale one would be acceptable, as it is no longer a recording studio from what I understand. The building is still there, but I believe all the equipment has been removed? At least that is what I understood from something said by someone at Spitfire.
> 
> So Chameleon isn't working in Studio One? That could be a problem.


It's not an IR made using samples of Spitfire products. It's an algorithmic simulation of the reverb tail with no early reflections, so it is in no way a violation of the EULA. In reality, it's no different to programming your own space in the reverb plugin of your choice to match a specific space. The only difference is that this is AI-automated.


----------



## ToadsworthLP (Aug 25, 2021)

icecoolpool said:


> It's not an IR made using samples of Spitfire products. It's an algorithmic simulation of the reverb tail with no early reflections, so it is in no way a violation of the EULA. In reality, it's no different to programming your own space in the reverb plugin of your choice to match a specific space. The only difference is that this is AI-automated.


That's what I meant, it's basically just like an automated version of tweaking Pro-R settings for days until it sounds kinda close to the real thing, then exporting and sharing that preset, at least in my opinion. Sharing generated IRs wouldn't be distributing Spitfire's recordings, just some freshly synthesized shaped noise based on their general sound. (Disclaimer: still no legal expert)


----------



## dzilizzi (Aug 25, 2021)

ToadsworthLP said:


> That's what I meant, it's basically just like an automated version of tweaking Pro-R settings for days until it sounds kinda close to the real thing, then exporting and sharing that preset, at least in my opinion. Sharing generated IRs wouldn't be distributing Spitfire's recordings, just some freshly synthesized shaped noise based on their general sound. (Disclaimer: still no legal expert)


You are still creating an IR from the Air recordings. Even if you did this yourself, it is against the EULA, from what I understand. Of course, they cannot say you can't do it for personal use, you just can't distribute it. There have been many who have reprogrammed the IR for Air and others have wanted them to share, that's why I know this has come up previously.


----------



## Hans-Peter (Aug 25, 2021)

dzilizzi said:


> You are still creating an IR from the Air recordings. Even if you did this yourself, it is against the EULA, from what I understand. Of course, they cannot say you can't do it for personal use, you just can't distribute it. There have been many who have reprogrammed the IR for Air and others have wanted them to share, that's why I know this has come up previously.


If this were the case, Air or AR would have to have protected their signature sound, which in current legislation is not possible, to my knowledge. What is not allowed is to market/name any parameter/reverb approximations as Air or AR IRs, specifically. These are registered brands and, hence, using their names requires prior approval by the rights holders.

For the same reason there is a difference in stating "recorded at AR Studio 2" (location) and "recorded by AR Studios" (institution). You encounter similar legal considerations when dealing with microphone emulations.

However, the previous comments are right that Chameleon does not copy but approximates parameters within the framework of a synthesized reverb. Moreover, significant aspects, such as the ERs, are missing, not to mention positioning. There is no way that such an approximation would entail legal repercussions; it would be like being forbidden to discuss the estimated RT60 of these spaces. Actually, an EQ match could be considered more problematic than what Chameleon does, and even that would fall within the outlined argumentation of discussion/parameterisation.

However, if you were to create an IR directly from the actual recording (as in combining percussion hits to derive an impulse), well, that would be another story and could potentially violate the terms of use. But that's not the case with Chameleon. In fact, it neither uses nor manipulates (as in sampling) the recording at all; it just listens to it. This falls within the legal ramifications of listening. Nothing more, nothing less. Just be careful how you name the files and you should be good to go.

Apart from that, I don't see much of a point in sharing files as the tail may vary depending on instrument and positioning - turning this into a highly individual issue. But I could be wrong about that (certainly, the tail is more invariant than the earlier components). In any case, Chameleon appears to be a helpful tool, but obviously won't get you the sound of Air/AR as that is a highly complex interaction between microphones, positioning, signal path, and so on. It will only help you in mixing samples recorded there with recordings from different locations. Thus, the intention and use of a reverb created with Chameleon is to mix recordings from different sources. And that's not exactly a rare thing to do ... mixing ...  ... and perfectly within the terms of use. Otherwise, these spaces would have to forbid mixing their libraries with other 3rd-party libraries - an exclusive use of library clause *lol*.

JM2C.


----------



## Soundbed (Aug 25, 2021)

ProfoundSilence said:


> here's some toying around simulating microphones and using aux sends.



Sounds very promising!!


----------



## Trash Panda (Aug 25, 2021)

For anyone who has exported IRs from Chameleon into a separate IR loader, are you finding you have to drastically adjust the settings to get a similar sound as Chameleon?

At minimum, it seems like I need to turn the reverb level in Reverberate from -20 dB down to -35 dB and the master down by about 2 dB.


----------



## derschoenekarsten (Aug 25, 2021)

Alright fellas. *Here* you'll find four Chameleon IRs, two sets of two from fancy "rooms" people have been asking for (at least until _Aug 31_; is there a way to include files in a post here???). Hope some of y'all have fun with them.

As *@Hans-Peter *rightfully points out, different instruments yield different IRs and positioning isn't reflected. Both sets are from percussive sources, as I find those work best as "generalist" sources.

*@Trash Panda* /all: They're loud AF. I usually turn the level down by ≈ 25-30 dB in Reverberate (targeting 0 dBVU on the mix bus before any processing for the whole piece).

-----

IMO the customization features of the plug-in have been glossed over a little. The Decay, Stretch, Treble, Bass, and Pre-Delay controls all trigger a recalculation of the IR. As the plug-in is rather light on CPU, this allows for a really wide variety of sounds that exported IRs can hardly capture.


----------



## Trash Panda (Aug 25, 2021)

Interesting. So you're approaching this as a single "room glue" type reverb for the entire orchestra instead of loading it on each individual instrument?

Thanks for confirming Reverberate settings.



derschoenekarsten said:


> IMO the customization features of the plug-in have been glossed over a little. The Decay, Stretch, Treble, Bass, and Pre-Delay controls all trigger a recalculation of the IR. As the plug-in is rather light on CPU, this allows for a really wide variety of sounds that exported IRs can hardly capture.


Agreed. The presets it creates sound very natural without much, if any, tweaking. But if the desire is there to tweak, you can get some really cool results with what is built in. Can't wait for pay day to arrive to pick this up.


----------



## jcrosby (Aug 25, 2021)

Trash Panda said:


> For anyone who has exported IRs from Chameleon into a separate IR loader, are you finding you have to drastically adjust the settings to get a similar sound as Chameleon?
> 
> At minimum, it seems like I need to turn the reverb level in Reverberate from -20 dB down to -35 dB and the master down by about 2 dB.


Pretty sure there was a bug in the version we downloaded last week where you had to save the preset before exporting an IR if you used any of Chameleon's EQs and filters to process it. They released a new version yesterday; see if that fixes it. Also, are you exporting the IR fully wet? (I imagine you shouldn't have to, but worth checking...)

Either way, the last few IRs I exported were identical to Chameleon. I A/B'd each for a few minutes to be sure and they sounded the same. I did have to play around for a sec to get levels to be more or less the same.

Level-wise, that may very well be due to your IR reverb. I know Space Designer normalizes any IR you put into it, and I'm pretty sure many IR reverbs do the same as a way to keep levels consistent. I.e., if IRs came in at different levels, one from a quiet source might be too quiet, while another normalized to full scale might nearly take your head off...
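Speculating a bit on that last point: Space Designer's exact behaviour isn't documented here, but peak normalization is one common scheme, and a tiny numpy sketch shows why two IRs exported at wildly different levels can end up equally loud once loaded (the function name and the 0 dBFS target are my own, purely illustrative):

```python
import numpy as np

def peak_normalize(ir: np.ndarray, target_db: float = 0.0) -> np.ndarray:
    """Scale an impulse response so its absolute peak sits at target_db dBFS."""
    peak = float(np.max(np.abs(ir)))
    if peak == 0.0:
        return ir  # silent IR: nothing to scale
    return ir * (10.0 ** (target_db / 20.0) / peak)

# Two IRs exported at very different levels...
quiet_ir = np.array([0.001, 0.0005, 0.0002])
loud_ir = np.array([0.9, 0.4, 0.1])

# ...come out at the same peak after loading, which is why the
# loader's output gain often needs re-trimming by ear afterwards.
print(round(float(np.max(np.abs(peak_normalize(quiet_ir)))), 6))  # 1.0
print(round(float(np.max(np.abs(peak_normalize(loud_ir)))), 6))   # 1.0
```

If the loader does something like this, the -20 dB vs. -35 dB discrepancy would simply be the normalization gain being handed back to you.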


----------



## Jay Panikkar (Aug 27, 2021)

So this is how Skynet goes online. Take control of reverb to purge humanity. 

R.I.P. fellow humans.


----------



## averystemmler (Aug 27, 2021)

Jay Panikkar said:


> So this is how Skynet goes online. Take control of reverb to purge humanity.
> 
> R.I.P. fellow humans.


Ah well. We had a good run.


----------



## Dietz (Aug 28, 2021)

I must be doing something wrong, obviously ... 8-/

I'm feeding a demo version of Chameleon with band-limited reverbs excited by noise-bursts. Regardless of whether the reverb is high-cut sharply at 1000 Hz or low-cut at the same frequency, the resulting IRs are more or less the same. The re-constructed reverbs sound nice (like coming from a fully de-correlated pink-noise sample, although quite bass-heavy), but not even remotely like the original reverb tail.

Reverb-inherent panning and/or stereo width appear to be arbitrary in the resulting IRs, even with the option "Force Centered Stereo Image" disabled during the analysis-process. Width is guessed and adjusted in real-time in the plug-in's "Customisation"-section, though.

I wanted so very much that this actually works! *sigh* ... might be that I'm just expecting something different ...
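For anyone who wants to reproduce this kind of probe, here is roughly what such a test signal could look like in numpy. The sample rate, burst length, decay time, and FFT brick-wall filtering are my assumptions, not necessarily Dietz's exact setup:

```python
import numpy as np

SR = 44100  # sample rate (assumed)

def noise_burst(dur_s=0.05, sr=SR, seed=0):
    """Short white-noise burst used to excite the reverb."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(int(dur_s * sr))

def decaying_noise_ir(rt60_s=2.0, sr=SR, seed=1):
    """Synthetic reverb tail: white noise under an exponential
    envelope that drops 60 dB over rt60_s seconds."""
    n = int(rt60_s * sr)
    t = np.arange(n) / sr
    rng = np.random.default_rng(seed)
    env = 10.0 ** (-3.0 * t / rt60_s)  # -60 dB at t = rt60_s
    return rng.standard_normal(n) * env

def brickwall(x, cutoff_hz, sr=SR, kind="low"):
    """Sharp band-limit via FFT: zero all bins above (or below) cutoff."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    mask = freqs <= cutoff_hz if kind == "low" else freqs >= cutoff_hz
    return np.fft.irfft(spec * mask, n=len(x))

# Band-limit the *reverb*, then excite it with the full-band burst,
# mirroring the low-cut / high-cut comparison described above.
ir_lo = brickwall(decaying_noise_ir(), 1000.0, kind="low")
ir_hi = brickwall(decaying_noise_ir(), 1000.0, kind="high")
wet_lo = np.convolve(noise_burst(), ir_lo)
wet_hi = np.convolve(noise_burst(), ir_hi)
```

With these two signals the tails have completely disjoint spectra, so an estimator that returned near-identical IRs for both would indeed be reconstructing the spectrum from its training priors rather than from the recording.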


----------



## averystemmler (Aug 28, 2021)

Dietz said:


> I must be doing something wrong, obviously ... 8-/
> 
> I'm feeding a demo version of Chameleon with band-limited reverbs excited by noise-bursts. Regardless of whether the reverb is high-cut sharply at 1000 Hz or low-cut at the same frequency, the resulting IRs are more or less the same. The re-constructed reverbs sound nice (like coming from a fully de-correlated pink-noise sample, although quite bass-heavy), but not even remotely like the original reverb tail.
> 
> ...


I've found that the result is a little closer to what you might expect when feeding it a less "clinical" source - i.e., when calibrating Chameleon to actual music, or to samples recorded in a space. I suspect the AI was trained on such material, and therefore isn't equipped to interpret less natural sources. It is designed and primarily marketed as an ADR tool, after all.

There are consistencies between the profiles I've created from (for instance) Teldex samples, compared to Air Lyndhurst samples, so it is managing to establish _something_ about the decay of the space. But the output is far from an IR of the actual room, and lacks discrete reflections entirely. The width especially, as you've pointed out, seems to be entirely guessed and simulated via the width knob (which may just control a decorrelation factor or something between the channels? You'd know better than I.)

Definitely more of an "impression" of a space than a recreation of one.


----------



## ProfoundSilence (Aug 28, 2021)

I think this was designed with acoustic material in mind


----------



## Dietz (Aug 28, 2021)

averystemmler said:


> Definitely more of an "impression" of a space than a recreation of one.


This is what it boils down to.


----------



## Leandro Gardini (Aug 28, 2021)

Based on the comments, I was expecting the plugin to work wonders, but in my tests it doesn't quite work like a charm.
The plugin sometimes gets a close print of the room, but in most cases it is far from an accurate match.
It is necessary to carefully turn the knobs to get as close as possible.


----------



## jcrosby (Aug 28, 2021)

Dietz said:


> I must be doing something wrong, obviously ... 8-/
> 
> I'm feeding a demo version of Chameleon with band-limited reverbs excited by noise-bursts. Regardless of whether the reverb is high-cut sharply at 1000 Hz or low-cut at the same frequency, the resulting IRs are more or less the same. The re-constructed reverbs sound nice (like coming from a fully de-correlated pink-noise sample, although quite bass-heavy), but not even remotely like the original reverb tail.
> 
> ...


The Teldex presets I've made so far sound pretty accurate compared to the tail used to create them. When I run the same mic source(s) I fed into the preset back through the reverb again, it sounds like I've essentially just extended the length of a decay knob. Basically the tone of the reverb seems quite close to what was fed into it.

My hunch is that a big part of Chameleon's ML routine involves tonal recognition, and the ability to separate that 'close' tonal information from the reverberant tail. If that were the case I'd imagine band limited noise could potentially confuse the algorithm...

AI's quirky like that. It can do a routine you teach it incredibly well, but can fail miserably when you present it with an edge case... No idea for sure, but that's my guess...

You should also reach out to the developer, even if solely out of curiosity. We've had a decent back and forth over the past week. He wasn't expecting the use case that evolved out of this thread, but despite it being designed for ADR he's eager to explore its use in musical (and perhaps other) applications... He's already indicated they are going to experiment with ERs and see how it goes despite initial tests resulting in phase issues... Basically he seems quite open to feedback about how it can be improved...


----------



## Dietz (Aug 28, 2021)

jcrosby said:


> If that were the case I'd imagine band limited noise could potentially confuse the algorithm...


Well, it's basically the _reverb_ that was band limited, and I fed it with noise to make sure that the very obvious spectrum of the tail would be clearly visible. After all, the frequency spectrum (actually its development over time) is a decisive part of a reverb's overall sonic impression, isn't it? Tail length and width alone won't tell the whole story.

... but anyway, don't get me wrong, I don't want to badmouth a potentially revolutionary product.


----------



## Soundbed (Aug 28, 2021)

Dietz said:


> I must be doing something wrong, obviously ... 8-/
> 
> I'm feeding a demo version of Chameleon with band-limited reverbs excited by noise-bursts. Regardless of whether the reverb is high-cut sharply at 1000 Hz or low-cut at the same frequency, the resulting IRs are more or less the same. The re-constructed reverbs sound nice (like coming from a fully de-correlated pink-noise sample, although quite bass-heavy), but not even remotely like the original reverb tail.
> 
> ...





Dietz said:


> Well, it's basically the _reverb_ that was band limited, and I fed it with noise to make sure that the very obvious spectrum of the tail would be clearly visible. After all, the frequency spectrum (actually its development over time) is a decisive part of a reverb's overall sonic impression, isn't it? Tail length and width alone won't tell the whole story.
> 
> ... but anyway, don't get me wrong, I don't want to badmouth a potentially revolutionary product.


Did you try sending it some regular signal? The machine learning model was likely trained for voices speaking in various spaces.


----------



## Dietz (Aug 29, 2021)

My simple expectation was that the result should be easier to achieve when there aren't several unknown variables (an unknown input signal and its unknown behaviour in an unknown room), but just one (the unknown room).

But like I wrote above: My definition of "space" as well as my actual needs might be different.


----------



## Trash Panda (Sep 1, 2021)

Picked this up today. Going to see how well it can extract the Teldex tail from a Metropolis walkthrough video tomorrow.


----------



## Soundbed (Sep 3, 2021)

Four more days of the intro sale. I know a couple people bought this. Any new thoughts to share?


----------



## dzilizzi (Sep 3, 2021)

Soundbed said:


> Four more days of the intro sale. I know a couple people bought this. Any new thoughts to share?


I'm also interested. Though with all my effects I can probably do what they are doing (matching the tail), I'm trying to decide if it is worth it for the ease of use.


----------



## Trash Panda (Sep 3, 2021)

Soundbed said:


> Four more days of the intro sale. I know a couple people bought this. Any new thoughts to share?


Well, it completely trivializes the effort required to make different libraries sound like they’re in the same space without degrading the quality of the samples being blended into the room. Not sure what more people could want.


----------



## szczaw (Sep 3, 2021)

Trash Panda said:


> Well, it completely trivializes the effort required to make different libraries sound like they’re in the same space without degrading the quality of the samples being blended into the room. Not sure what more people could want.


I want a bigger discount.


----------



## Trash Panda (Sep 3, 2021)

szczaw said:


> I want a bigger discount.


Ok, that’s fair.


----------



## jcrosby (Sep 3, 2021)

Soundbed said:


> Four more days of the intro sale. I know a couple people bought this. Any new thoughts to share?


I'm still very happy I bought it. Unlike some things you inevitably impulse buy and don't actually find much use for after the fact, I personally have been using this daily on all kinds of sources. I've been using it on all orchestral sections and percussion in every cue I've worked on over the past week+... The reverbs also repurpose nicely for sound design...

It's very light on CPU, even in zero-latency mode. IR export is working great, and although I haven't needed it much, it has been nice to be able to bounce an IR and reshape the attack in another plugin... Most of the time I just stick with the actual plugin. Overall I'm glad I bought it...


----------



## Soundbed (Sep 3, 2021)

jcrosby said:


> I'm still very happy I bought it. Unlike some things you inevitably impulse buy and don't actually find much use for after the fact, I personally have been using this daily on all kinds of sources. I've been using it on all orchestral sections and percussion in every cue I've worked on over the past week+... The reverbs also repurpose nicely for sound design...
> 
> It's very light on CPU, even in zero-latency mode. IR export is working great, and although I haven't needed it much, it has been nice to be able to bounce an IR and reshape the attack in another plugin... Most of the time I just stick with the actual plugin. Overall I'm glad I bought it...


wow great endorsement thank you


----------



## Soundbed (Sep 4, 2021)




----------



## XComposer (Sep 8, 2021)

Dietz said:


> I must be doing something wrong, obviously ... 8-/
> 
> I'm feeding a demo version of Chameleon with band-limited reverbs excited by noise-bursts. Regardless of whether the reverb is high-cut sharply at 1000 Hz or low-cut at the same frequency, the resulting IRs are more or less the same. The re-constructed reverbs sound nice (like coming from a fully de-correlated pink-noise sample, although quite bass-heavy), but not even remotely like the original reverb tail.
> 
> ...


I think that this is paradoxically a sign that Chameleon works… I'll try to explain. I think one of its main tasks is to guess the correct frequency slope across the whole spectrum; in other words, the decay time for each frequency in the audible range. If you make it listen to, say, a flute, it will have very few indications about the decay time of the low frequencies; on the other hand, if you make it listen to a contrabassoon, for example, it will be hard for it to guess the right decay time for the high frequencies.

So I imagine it must have some internal routines (probably based on AI training, statistical or probability data) to try to "reconstruct" the part of the slope that is missing from the sound it is listening to. The more data you give it, the easier this reconstruction will be. It will perform better if you make it listen to, say, speaking voices, completely different instruments playing together, orchestral tutti, hand clapping… sources with a lot of frequency content: low, middle and high at the same time.

If you give it noise (even noise cut at a given frequency), it already contains a lot of frequency data, so the software will "think" that it will be easy to reconstruct the entire slope across the audible range and… it will do it. I don't know anything about how Chameleon actually works, but this is what I can guess from what you describe, from my point of view.


----------

