# Transducers and making sound from nothing



## LKHD (Jun 15, 2021)

My question is a scientific one. If transducers, which are part of every speaker, convert electrical signals to sound and vice versa, why must we always use samples recorded from real-world instruments to produce real-world sounds? Why can't we just produce real sounds from a program? Is this a mathematical or processing-power issue?


----------



## Lukegilson (Jun 16, 2021)

Processing power


----------



## Dietz (Jun 16, 2021)

LKHD said:


> Why can't we just produce real sounds from a program?


Physical modelling, among other approaches, was invented for exactly this purpose.


----------



## ism (Jun 16, 2021)

I’d argue it’s the immensely complex physics of both sound combined with the immensely sophisticated human perceptual capacity to reconstruct pictures of an environment from a quasi-one dimensional signal.

Have a look at something like NI Reaktor, and you’ll see that there are certain sounds that can be sort of ok-ishly physically modeled - simple plucked strings, for example. But the complexity of modeling the timbre and resonance of, say, a violin is another thing, especially when you consider the immensely complex dynamics of human fingers and bowing techniques and rosin on individual horse hairs of the bow scraping along the string … and this doesn’t even begin to add the spatial perception of the hall you’re recording in, which imo no plugin comes close to adequately simulating.

Physics is hard.
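For the curious: that "sort of ok-ish" plucked-string case is about the simplest physical model there is. Here's a minimal, purely illustrative sketch of the classic Karplus-Strong algorithm in Python (all parameter values are just reasonable defaults, not anyone's product):

```python
import numpy as np

def pluck(freq=220.0, sr=44100, dur=2.0, damp=0.996):
    """Karplus-Strong: fill a delay line with noise (the 'pluck'),
    then repeatedly average adjacent samples; the feedback loop
    settles into a decaying, string-like tone whose pitch is
    roughly sr / len(buf)."""
    n = int(sr / freq)                      # delay-line length sets the pitch
    buf = np.random.uniform(-1.0, 1.0, n)   # the initial excitation is noise
    out = np.empty(int(sr * dur))
    for i in range(out.size):
        j = i % n
        out[i] = buf[j]
        # averaging acts as a low-pass filter; damp controls decay time
        buf[j] = damp * 0.5 * (buf[j] + buf[(j + 1) % n])
    return out

samples = pluck()   # ~2 seconds of a plucked "string" near 220 Hz
```

That it works at all is the good news; that a violin adds bow friction, body resonances, fingers and a concert hall on top is the bad news.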


----------



## LKHD (Jun 16, 2021)

ism said:


> I’d argue it’s the immensely complex physics of both sound combined with the immensely sophisticated human perceptual capacity to reconstruct pictures of an environment from a quasi-one dimensional signal.
> 
> Have a look at something like NI Reaktor, and you’ll see that there are certain sounds that can be sort of ok-ishly physically modeled - simple plucked strings, for example. But the complexity of modeling the timbre and resonance of, say, a violin is another thing, especially when you consider the immensely complex dynamics of human fingers and bowing techniques and rosin on individual horse hairs of the bow scraping along the string … and this doesn’t even begin to add the spatial perception of the hall you’re recording in, which imo no plugin comes close to adequately simulating.
> 
> Physics is hard.


Thanks for all your responses. This all makes sense. I certainly look forward to the day we figure this out. ism, I checked out NI Reaktor. Very interesting. I thought Serum was the big synth of choice, but now, if I ever decide to go deeper into modular synthesis, I'm thinking Reaktor would be the way to go.


----------



## Dietz (Jun 17, 2021)

LKHD said:


> if I ever decide to go deeper into modular synthesis, I'm thinking Reaktor would be the way to go.


Definitely!


----------



## doctoremmet (Jun 17, 2021)

LKHD said:


> Thanks for all your responses. This all makes sense. I certainly look forward to the day we figure this out. ism, I checked out NI Reaktor. Very interesting. I thought Serum was the big synth of choice, but now, if I ever decide to go deeper into modular synthesis, I'm thinking Reaktor would be the way to go.


Reaktor is indeed great. If you want to listen to some physically modelled instruments, to sort of gauge the state of the art, check some Soundcloud / YT demos of these vendors / models:

1. Orchestral (solo) instruments: Audio Modeling SWAM strings, brass and woodwind instruments

2. Stringed instruments and more synthy sound design: Reason Studios Friktion

3. Modeled guitar: AAS Strum GS2

4. Percussion and more synthy sound design: AAS Chromaphone 3

5. Synthesizer sounds with an acoustic flavour based on PM principles: Rhizomatic Plasmonic, Madrona Labs KAIVO


----------



## doctoremmet (Jun 17, 2021)

And the grandmother of all physical modelling synths, all the way back from 1993:


----------



## doctoremmet (Jun 17, 2021)

LKHD said:


> why must we always use samples recorded from real-world instruments to produce real-world sounds


I really like your question though. A way younger doctoremmet asked this very same question once. The current doctoremmet can’t wait for processing power and models to catch up on samples. 

The SWAM instruments render promising results, yet at times still yield completely “uncanny valley” sounds. But still, I bet my mom would totally think she’s listening to the real thing.

You may also want to look up some older posts by fellow forum member @lychee who has made the bold choice to abandon samples and go “physically modelled” all the way. He frequently shares his experiences along that journey.


----------



## lychee (Jun 17, 2021)

Thank you @doctoremmet for quoting me so often, it makes me feel important - but you should stop, because I'm going to start getting cocky and look down on people like a diva. 

But to come back to the subject, I think it's a mistake for developers to stay focused purely on samples.
Maybe it was too early for me to look for alternatives, but after buying sample library after sample library that in the end often only half satisfied me, I started to change course.
A sample is a rather inflexible recording: we can't capture one note and play the whole range without distorting the original sound.
To build an instrument we need one sample per note, and I'm not even talking about dynamics, legatos... all of which always ends in plugins that overload our hard drives.

Physical modeling is the most complex alternative: it consists of mathematically recreating an instrument from its physical properties.
I understand that not many developers have ventured into this technology, which must require a lot of time and money in R&D.

But even though I'm not an expert on the subject (I try to be), I know there are other ways to capture real sound and synthesize it.
Synthesis has far fewer constraints than samples, and is much lighter on HDD, SSD and RAM resources (the CPU being the exception).

Plugins such as Wallander WIVI, Synful Orchestra and those of Sampleson fall into the category of recreating acoustic instruments via synthesis.
Many will find that they still sound too synthetic, but for my part I think all these technologies are coming of age, even if they're still not perfect.

This is one of the sounds I'm working on to finish my orchestral kit, which I try to make sound as realistic as possible (with my limited knowledge).




Reason Friktion (physical modeling): Violins, Violas, Harp.
Wallander WIVI (synth-based modeling): Trumpets, Horns, Trombone, Tuba.


----------



## timprebble (Jun 17, 2021)

"can’t wait for processing power and models to catch up on samples."

We are such a long way away from this happening. What a high-quality instrument, or prop, or animal, or ambience generates in the real world - as spatial audio, performed by a real human engaging with it and captured with high-quality microphones - is so complex that the idea a synthesized or modelled version will ever 'catch up' is, I personally believe, a technical delusion. It's not the transducer that is the issue - it's that nature is so incredibly complex.

When an entire company's resources are dedicated to modelling just one instrument (a piano), they can come close-ish to an interpretation of that instrument, but still without the unique quirks, character and variety that so easily occur IRL. Every drummer knows the quirks of their kit, every guitarist knows their own instrument, and it is unique. But samples and recordings are also not limited to one instrument, and even a very expensive KYMA system, with its powerful dedicated processing, cannot analyse and resynthesise an organic "real" sound without also revealing a digital pallor.

This is not a criticism - this is recognising we are still in the infancy of digital synthesis and processing. I love synths, but what each is best at is not emulating real organic sounds that occur in front of a microphone. They are best at their own unique sounds, sounds that likely could not occur in nature.


----------



## ism (Jun 17, 2021)

timprebble said:


> it's that nature is so incredibly complex


Yep, physics. 

A smaller version of this is if you ever play pool with a bunch of physicists. Technically pool is what physicists call a "solved problem", in that the physics is just angle of incidence = angle of reflection, and there are no Nobel prizes to be won in figuring out how nature is supposed to work on a pool table. 

But then you actually try to make the theory work ... 

I don't recommend actually playing pool with actual physicists, incidentally. And violins are massively, massively more complex than pool tables.


----------



## doctoremmet (Jun 17, 2021)

timprebble said:


> can’t wait for processing power and models to catch up on samples.


I get your point and I agree. But I didn’t state “can’t wait for processing power and models to be able to completely pass the Turing test”, now did I?

Samples aren’t THAT hard to catch up on. They’re not that accurate as snapshots of reality, and they are about as far removed from being true reflections of real physics as modelled instruments are. So the catching up you are proposing is a completely different one. And we agree on one thing: that won’t happen until someone builds The Matrix.

Pianoteq is hardly a real piano. But to many it already is the go-to synthetic piano, and in certain aspects it is more expressive than samples. Granted, I still prefer my sampled pianos. Which brings me to a second point: when I say “catching up” I mean “in terms of rendering a very musical and expressive instrument that brings me joy and yields usable results in arrangements”. So again: we do not need to catch up with every detailed, chaotically complex, butterfly-effect aspect of physics for models to catch up with the equally restricted (in terms of approaching reality) and in many ways more static nature of samples.


----------



## doctoremmet (Jun 17, 2021)

timprebble said:


> I love synths, but what they are each best at, is not emulating real organic sounds that occur in front of a microphone. They are best at their own unique sounds, sounds that likely could not occur in nature.


Absolutely. But to my mom, the audio of Doctor Mix playing his SWAM sax with a TEControl totally and convincingly sounds like a saoxphone player

Edit: decided to leave in the typo, because maybe it is a perfect reflection of how the same instrument sounds to me haha


----------



## lychee (Jun 18, 2021)

As said above, a sample is a "photo" of reality, and at present we can't do anything more faithful or direct for creating an instrument.
It has been said that synthesis is best at playing synthetic sounds - yes, that's indisputable logic.
But I say that synthesis is not good only for synthetic sounds, and I don't see why we should confine it to that, or why it couldn't have as much credit as samples in the domain of acoustic instruments.
What we lose in sonic fidelity we gain in playability; a well-used SWAM instrument will always sound more alive than any sample-based instrument.
With physical or synth-based modeling, I don't have to worry about note lengths: a short note makes a staccato, a long note makes a sustain, no keyswitch needed.
Also, no need for round robins: I play repetitions and each note really is different.
I could give lots of other examples, but in short, between samples and synthesis it's fidelity versus living sound, and I chose to be alive.


----------



## lychee (Jun 18, 2021)

The guy from Sampleson is passionate about electric pianos and recently made the new Reed 106 plugin.









Reed106. A living model of a Reed Electric Piano. Mac and Win, VST/AU and Standalone versions available. (sampleson.com)





I'm just disappointed that he's only interested in Rhodes and co; the only plugin that changes direction is a guitar ... mixed with an electric piano ... but why man, why?!!!
I would have liked so much for him to use his spectral modeling technology for something other than this type of instrument.

Moreover, I would love to know more about this technology so I can embark on the adventure myself.
Does anyone know how this stuff is done, and whether it can be replicated in Reaktor or some other synthesis program?
For information, I am a complete beginner in this field.


----------



## Dietz (Jun 18, 2021)

lychee said:


> What we lose in sonic fidelity we gain in playability


True, but don't forget that you will have to learn to actually _play_ each of those instruments. If this is what you're after - great. Samples can have many aspects of a (hopefully) virtuoso performance already baked-in.


lychee said:


> As said above, the sample is a "photo" of reality


Multi-samples are more like a stop-motion movie.


----------



## doctoremmet (Jun 18, 2021)

Dietz said:


> True, but don't forget that you will have to learn to actually _play_ each of those instruments. If this is what you're after - great.


Agreed. But it really differs from instrument to instrument. I find Aaron Venture’s approach really EASY to learn (tone suffers - and yes, it is still sample-based of course) vs. for instance SWAM or Chris Hein.


----------



## iamnemo (Jun 18, 2021)

ism said:


> A smaller version of this is if you ever play pool with a bunch of physicists. Technically pool is what physicists call a "solved problem", in that the physics is just angle of incidence = angle of reflection, and there are no Nobel prizes to be won in figuring out how nature is supposed to work on a pool table.


Maybe not a Nobel prize (not the right type of prize), but the Abel prize, which is the equivalent for mathematics. Check Yakov Sinai's work on dynamical billiards. Even pool tables can be chaotic!

Now, speaking as a physicist, I would say that the problem of physically modelled instruments is and will remain twofold. On one side, the systems are generally too complex to be captured by models even though we know the physics behind them; parts interact in complex non-linear ways, etc. The second problem is related: we cannot measure minute variations of parameters and their effects. The smallest change in the initial values rapidly changes the results (similar to the butterfly effect in weather systems).

Take a very simple oscillatory system: the well-known double pendulum. Seems simple enough? Look at the effect of minute changes in the initial position:




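That sensitivity is easy to demonstrate numerically. Below is a small sketch (assumed parameters: unit masses and arm lengths, the standard textbook equations of motion, RK4 integration): two double pendulums start one billionth of a radian apart, and after 20 simulated seconds their angles have drifted far apart.

```python
import math

def deriv(s, g=9.81):
    """Equations of motion for a double pendulum with unit masses
    and unit arm lengths (the standard textbook form)."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 3.0 - math.cos(2.0 * d)
    a1 = (-3.0 * g * math.sin(t1) - g * math.sin(t1 - 2.0 * t2)
          - 2.0 * math.sin(d) * (w2 * w2 + w1 * w1 * math.cos(d))) / den
    a2 = 2.0 * math.sin(d) * (2.0 * w1 * w1 + 2.0 * g * math.cos(t1)
                              + w2 * w2 * math.cos(d)) / den
    return (w1, a1, w2, a2)

def rk4(s, dt):
    """One classic fourth-order Runge-Kutta step."""
    add = lambda a, k, h: tuple(x + h * y for x, y in zip(a, k))
    k1 = deriv(s)
    k2 = deriv(add(s, k1, dt / 2))
    k3 = deriv(add(s, k2, dt / 2))
    k4 = deriv(add(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(4))

# Two pendulums whose second arm differs by one billionth of a radian.
a = (math.pi / 2, 0.0, math.pi / 2, 0.0)
b = (math.pi / 2, 0.0, math.pi / 2 + 1e-9, 0.0)
for _ in range(20000):          # 20 simulated seconds at dt = 1 ms
    a, b = rk4(a, 1e-3), rk4(b, 1e-3)
gap = abs(a[2] - b[2])          # the 1e-9 difference has grown hugely
```

If you cannot even keep two idealized simulations together, matching a simulation to a real wooden instrument is a different league entirely.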
But great progress is being made regardless. Many companies (in France and Canada, related to IRCAM among others), such as Modartt and AAS, are tackling the problem. I also count on AI to help, as it already does in math and physics, solving systems previously intractable.

Musically I prefer to use such techniques to create completely new hybrid instruments like Chromaphone does.


----------



## doctoremmet (Jun 18, 2021)

There’s also another factor:

Physics and accurate-enough models approaching the intricacies of it (from a scientific POV) versus the “human perception”, “the ear” and psycho-acoustics.

I get how you physics scholars are aiming at approaching “reality”. I’m an assistant professor of economics myself, so I guess I’m more accustomed to pseudo-science anyway hahaha


----------



## doctoremmet (Jun 18, 2021)

iamnemo said:


> Maybe not a Nobel prize (not the right type of prize) but the Abel prize which is the equivalent for mathematics. Check Yakov Sinai's work and dynamical billiards. Even pool tables can be chaotic!
> 
> Now speaking as a physicist I would say that the problem of physically modelled instruments is and will remain twofold. On one side the systems are generally too complex to be captured by the models even though we know the physics behind. Parts interact in complex non-linear ways, etc. Second problem is related: we cannot measure minute variations of parameters and their effect. The smallest change in the initial values rapidly change the results (similar to the butterfly effect of weather systems).
> 
> ...



Also that pendulum makes me want to play with Generate! By no means a modelled synth, but I do love how its math transduces to air waves!


----------



## b_elliott (Jun 18, 2021)

lychee said:


> ....
> Plugins such as Wallander WIVI, Synful Orchestra and those of Sampleson fall into the category of recreating acoustic instruments via synthesis.
> Many will find that they still sound too synthetic, but for my part I think all these technologies are coming of age, even if they're still not perfect.


Interesting thread... nice to see WIVI coming up again. 

Some months back, @re-peat mentioned not to overlook WIVI in a thread I had about emulating the Zappa horn sound. His insight may help the OP in a different way:

"Don’t laugh, but what I would certainly reach for is Wallander’s WIVI. Not the most realistic sounding brass emulations on the market, but kneadable as butter, and it’s precisely that slightly synthetic quality of theirs that I think would fit quite well in a mock-FZ universe." [re-peat]


----------



## b_elliott (Jun 18, 2021)

iamnemo said:


> Now speaking as a physicist I would say that the problem of physically modelled instruments is and will remain twofold. On one side the systems are generally too complex to be captured by the models even though we know the physics behind. Parts interact in complex non-linear ways, etc. Second problem is related: we cannot measure minute variations of parameters and their effect. The smallest change in the initial values rapidly change the results (similar to the butterfly effect of weather systems).


Yep. When I first saw this thread I was going to link to a revealing video of a mid-60s, overweight Cuban dude playing bongos. As stupidly simple as the instrument is, his techniques for manipulating overtones and varying slaps highlighted just how complex things get under a pro's hands. 

Note: YouTube failed me, as I could not locate that video; but it spoke volumes to me last week, similar to what iamnemo so eloquently wrote.


----------



## iamnemo (Jun 18, 2021)

doctoremmet said:


> Physics and accurate-enough models approaching the intricacies of it (from a scientific POV) versus the “human perception”, “the ear” and psycho-acoustics.
> 
> I get how you physics scholars are aiming at approaching “reality”. I’m an economy assistant professor myself, so I guess I’m more accustomed to pseudo-science anyway hahaha


I cannot comment on the second part 🤣 but I agree 100% with the first. I did not want to reroute the thread, but physical modelling is only one part of the problem. As you mention, then you have the reproduction chain, room acoustics, ear physiology, hearing neurology, perception, mood, attention, illusions, etc.

I'm also fascinated by the neurology of it all. Vast subject! Check for example https://www.zlab.mcgill.ca/ and Oliver Sacks, etc.


----------



## iamnemo (Jun 18, 2021)

doctoremmet said:


> Also that pendulum makes me want to play with Generate! By no means a modelled synth, but I do love how its math transduces to air waves!


No sound but here's a playground:
https://www.myphysicslab.com/index-en.html


----------



## doctoremmet (Jun 18, 2021)

I trust you totally agree with my assessment in the second part, but are too polite to admit it 

Cool link. Never did much research, but I think I will now. Great stuff and very interesting!


----------



## doctoremmet (Jun 18, 2021)

b_elliott said:


> Interesting thread... to see Wivi coming up again.
> 
> Some months past @re-peat mentioned to not overlook Wivi in a post I had about emulating the Zappa horn sound. His insight would be helpful in a different way to the OP:
> 
> "Don’t laugh, but what I would certainly reach for is Wallander’s WIVI. Not the most realistic sounding brass emulations on the market, but kneadable as butter and it’s precisely that slighty synthetic quality of theirs that I think would fit quite well in a mock-FZ universe." [re-peat]


Two things:

1) I highly respect Piet’s @re-peat viewpoints on all things music- and VI-related. So coming from him, it usually means he has had substantial real-life exposure to the subject matter (me - well, by now we all know I get all of my knowledge from watching YouTube walkthroughs, right?)

2) I have heard some of the pretty convincing musical phrases @lychee has been able to get out of the WIVI instruments (sorry to bring up your name AGAIN pal, but you do it to yourself by being so focused and willing to put in the time to really learn your way around PM instruments)

I agree with Piet’s assessment that a “realistic” tone may not always be the ultimate objective. “Playability”, or even a “not-quite-real-yet-fittingly-expressive” quality, may be more important. Orrrr.... more FUN to play.

Is my understanding correct that the WIVI instruments do not seem to get much attention from the developer, in terms of future expansions, updates and such?


----------



## iamnemo (Jun 18, 2021)

b_elliott said:


> As stupid simple an instrument, his techniques to manipulate overtones, varied slaps highlighted just how complex things get under a pro's hands.


The apparent simplicity is very deceptive. Check for example the books by Rossing, Fletcher (yes, _the_ one) and others, such as:

Science of Percussion Instruments
The Physics of Musical Instruments
Vibration of Plates & Vibration of Shells
Etc.

I have about 200 such books and they barely start to cover this subject! Now imagine the physics of a piano or a cello, with such things as aliquot stringing and "sympathetic resonances". Inhomogeneous materials, variations with temperature/humidity, etc. Pure nightmare. Respect to Modartt, etc.


----------



## Kent (Jun 18, 2021)

iamnemo said:


> Maybe not a Nobel prize (not the right type of prize) but the Abel prize which is the equivalent for mathematics. Check Yakov Sinai's work and dynamical billiards. Even pool tables can be chaotic!
> 
> Now speaking as a physicist I would say that the problem of physically modelled instruments is and will remain twofold. On one side the systems are generally too complex to be captured by the models even though we know the physics behind. Parts interact in complex non-linear ways, etc. Second problem is related: we cannot measure minute variations of parameters and their effect. The smallest change in the initial values rapidly change the results (similar to the butterfly effect of weather systems).
> 
> ...



as always, there is a relevant XKCD:









Purity (xkcd.com)


----------



## doctoremmet (Jun 18, 2021)

Reading all this and letting it sink in, I feel what we need now is one of those drawings done by @ism. With some sort of soothing (or rather disturbing) Venn diagram, showing the sweet spots where “close enough” modelling of physics and the “not very receptive” psycho-acoustic and neurological attributes of the human species “overlap”.


----------



## doctoremmet (Jun 18, 2021)

kmaster said:


> as always, there is a relevant XKCD:
> 
> 
> 
> ...


There’s a reason economists aren’t even mentioned. The only slightly usable and fun field in economics I ever felt had some actual scientific substance was Game Theory. But that was done by psychologists haha


----------



## doctoremmet (Jun 18, 2021)

Oh. And to those mathematicians the only proper response is: that ain’t science, it’s merely a system


----------



## iamnemo (Jun 18, 2021)

@kmaster you forgot to add to the right:
philosophers, neurologists (because all of it is in our heads), then again biologists, chemists, physicists, etc..... Ad infinitum! 

Einstein: "chemistry is too complex for the chemists"
Russell: "physics is essentially mathematics"
Wigner: _The Unreasonable Effectiveness of Mathematics in the Natural Sciences_
And many others. Sorry, could not resist. Gotta go! Was fun!


----------



## lychee (Jun 18, 2021)

The discussion went too far for my little dunce brain, lol.  

Right or wrong, everyone will have their opinion, but I believe the question is not there.
The question is not to seek to simulate all the possible interactions of an instrument with its environment in all their complexity, but rather to recreate the essential - what the ear and the human mind manage to translate.
There's no point in over-complicating the task, and I don't think the rare actors in physical modeling have taken the physics to the extreme, and yet it works (to my ear anyway).

I think the skeptics should give Friktion a try, and if that doesn't convince them, it would at least show them that we are close to a turning point in music. Using this plugin, I felt I passed a milestone; my old sample-based sounds entered a new dimension.

Regarding WIVI, it's a plugin that doesn't necessarily sound right straight away, but it has a myriad of options (too many), which can lead to exactly what you want.
Too bad its developer abandoned the project, because it's a very good plugin that has nothing to be ashamed of next to the competition, in my own opinion.


----------



## doctoremmet (Jun 18, 2021)

lychee said:


> The question is not to seek to simulate all the possible interactions of an instrument with its environment in all their complexity, but rather to recreate the essential - what the ear and the human mind manage to translate.


This is the point I was trying to make as well, but did not manage to express as concisely as you have here.


----------



## lychee (Jun 18, 2021)

In my research into recreating acoustic sounds through synthesis, like WIVI and Sampleson do, I just discovered SPEAR, which can turn a sample into synthesis data (spectral analysis and resynthesis).




Of course we lose a bit of fidelity in the process (a sample already isn't 100% faithful to the original), but the result still seems correct.
There is certainly other software of this kind, but not being an expert in the field, I wonder why so few people think of using this data to recreate an acoustic instrument, which, as said before, would be much more flexible than samples.
For example, to simulate the dynamics of a sampled sound, we would have to phase-align and normalize the timbres in order to make a clean crossfade between samples recorded at different volume levels... it's tedious.
With synthesis, it would be enough just to morph between different analyses of the sound.
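That morphing idea can be sketched in a few lines. Below, two hypothetical partial-amplitude sets stand in for a "soft" and a "loud" analysis of the same note (the numbers are invented for illustration; real data would come from an analysis tool like SPEAR). Interpolating the amplitudes and resynthesizing gives a continuous dynamic, with no crossfading of recordings:

```python
import numpy as np

sr, f0 = 44100, 220.0
t = np.arange(sr) / sr                  # one second of time values

# Hypothetical partial amplitudes for a soft and a loud layer of the same
# note (louder playing generally pushes energy into upper harmonics).
soft = np.array([1.0, 0.30, 0.10, 0.03, 0.01])
loud = np.array([1.0, 0.80, 0.60, 0.40, 0.25])

def render(dynamic):
    """Resynthesize the note additively, morphing the spectrum by
    interpolating partial amplitudes - no crossfade, no phase alignment."""
    amps = (1.0 - dynamic) * soft + dynamic * loud
    sig = sum(a * np.sin(2.0 * np.pi * f0 * (k + 1) * t)
              for k, a in enumerate(amps))
    return sig / amps.sum()             # rough loudness normalization

quiet_note = render(0.0)                # pure soft layer
mid_note = render(0.5)                  # halfway between the two layers
```

Any value of `dynamic` between 0 and 1 yields an in-between timbre, which is exactly what a sampled crossfade struggles to do cleanly.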

Anyway, I'm interested in this barely emerging tip of the iceberg, and once again, if there's an expert hanging around here, I'd love to hear from them.


----------



## timprebble (Jun 18, 2021)

doctoremmet said:


> Samples aren’t THAT hard to catch up on. They’re not that accurate as snapshots of reality. And they are about as far removed of being true reflections of real physics as modelled instruments are. So the catching up you are proposing is a completely different one.




We're just going to have to agree to disagree about that one.

Modelling is only "similar" in a very few, very restricted realms, i.e. some specific traditional instruments. You could make a fairly short list of what is even available as a modelled instrument. And then you could make a list of the thousands and thousands of instruments currently available as deep-sampled emulations. See the gap?

Such a long way to go, and in reality they will never be the same, for the exact reason I mention: nature is so complex. So sure, someone may eventually make a modelled gong that is perfect at being a specific gong... but not all gongs, because no one's imagination or tech-bro skills are so great that they can even try to emulate, e.g., the different methods of making Balinese gongs or Thai gongs or Chinese gongs or some junk-store gong, etc., which are all variations that already exist.

The uncanny valley does not hold up when viewing the world. The potential of samples is impossible to catch up on, because modelling is limited by the humans programming it; even if machine learning is used, imho it is barking up the wrong tree.. but YMMV, because in the end whatever works for you, works for you!


----------



## lychee (Jun 19, 2021)

timprebble said:


> We're just going to have to agree to disagree about that one.
> 
> Modelling is only "similar" in a very few, very restricted realms ie some specific traditional instruments. You could make a fairly short list of what is even available as a modelled instrument. And then you could make a list of the thousands and thousands of instruments currently available as deep sampled emulations. See the gap?
> 
> ...


Neither samples nor physical modeling are the ultimate solution.
In the case of a gong, the sample will more easily sound just like a "photo" of the original instrument, but as you said, it will need to be deeply sampled to capture as much detail as possible - and we're talking about a "simple" instrument.
For modeling, you say it yourself: the limit is the human, and from my point of view human imagination has no limits.
But there is something I do not understand: I have been talking about a third solution for a while that everyone seems to ignore.
In my earlier post, I mentioned that there are ways to synthesize rather than sample the sound, so as to have a "photo" of the instrument, but in a more malleable material than the sample.
It would let you keep the very essence of the original instrument - and why not combine that with physical modeling for all the behavior specific to the instrument?

I have found another program which illustrates the need to overcome the sample, or rather its limitations.
Backbone resynthesizes the samples you feed it to go beyond the traditional sample:


----------



## thesteelydane (Jun 19, 2021)

LKHD said:


> My question is a scientific one. If transducers, which are part of every speaker, convert electrical signals to sound and vice versa, why is it that we always must use samples recorded from real world instruments to produce real world sounds. Why can't we just produce real sounds from a program? Is this a mathematical or processing power issue?


Don't forget you're not just recording a real instrument, but more importantly also a real musician who has spent a lifetime mastering their instrument - and no modelling will ever get close to human craftsmanship and imagination. At least I hope not. Ironically, this is also why even sampling real instruments sometimes fails to sound real. When you record a single note at a time, you completely remove the musical imagination and intent from the performance, and no amount of programming and scripting can put it back in.


----------



## ism (Jun 19, 2021)

At best wholly modelled instruments will give you something like "digital puppets", perhaps analogous to the way that Pixar characters are animated by puppeteers. Which is a form of acting, and can still express quite a lot of emotion, but it's necessarily very stylized and never really going to match a human actor. Or rather - why would you want it to? Maybe a better way to say it: it's never going to match a human performance with less effort than just hiring a human actor. 

In some ways, this is already what we have with conventionally sampled instruments.


----------



## doctoremmet (Jun 19, 2021)

I get all this, but on the other hand the so-called “human factor” in samples that gets so much praise is also a bit of crap. No offense guys haha. Hear me out.

Because I have to say that some of the 8-bit, half-a-second chord stabs Kevin Saunderson lifted off some disco album, sampled into his Mirage and then used on Good Life sound completely expressive and emotional to me. More so than many a “bespoke multi-sample”. So I think the actual creativity of the end user ultimately matters more.


----------



## doctoremmet (Jun 19, 2021)

ism said:


> At best wholly modelled instruments will give you something like "digital puppets", perhaps analogous to the way that Pixar characters are animated by puppeteers.


I wonder if the same can't be said about samples. Good luck recreating any Ligeti score without resorting to sampling the entire score into your Akai S3000 and pressing middle C


----------



## ism (Jun 19, 2021)

I do wonder what's going to be possible, though. In, say, 10 or 20 or 30 years, when everything above a low-end toaster will come with tens or hundreds or thousands of processors. Especially if the ultra-fast hardware linear algebra of neuromorphic chips can be leveraged in something like Fourier synthesis.

It's not that the industry is going to invest the necessary tens of billions to build a hardware computational paradigm for next-generation sampling .. but if it just so happens that the tens of billions being invested in machine-learning chips implement the same mathematics as sound, things could get interesting.

I wonder, though: if you had a budget of, say, 10 or 20 billion quid a year, and a mandate over 20 or 30 years to make sample libraries better - I mean, say the earth was going to be invaded by aliens or something if human sample libraries couldn't be improved, so it becomes the Manhattan Project of sampling - what would be possible?

And we're already seeing this kind of investment in neuromorphic, and perhaps soon, quantum paradigms of computational hardware. So who knows.
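To make "sound modelled from nothing" concrete: the classic minimal example is Karplus-Strong plucked-string synthesis, which gets a surprisingly string-like tone out of nothing but a noise burst circulating in a filtered delay line. A rough Python sketch (the parameter values here are illustrative choices, not taken from any product):

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, decay=0.996):
    """Karplus-Strong plucked string: a white-noise burst circulating
    in a delay line, lowpass-filtered slightly on every pass."""
    n = int(sr * duration)
    delay = max(2, int(sr / freq))          # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)   # the "pluck": pure noise
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # averaging adjacent samples damps the high partials, so the tone
        # decays from a bright attack to a mellow sustain, like a real string
        buf[i % delay] = decay * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

tone = karplus_strong(220.0, 1.0)  # one second of (roughly) A3
```

Even this toy captures the pluck-to-sustain envelope of a string; everything the thread is discussing (bowing dynamics, resin, the hall) is precisely what it leaves out.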


----------



## doctoremmet (Jun 19, 2021)

ism said:


> say the earth was going to be invaded by aliens or something if human sample libraries couldn't be improved, so it becomes the Manhattan Project of sampling - what would be possible?


Here’s a song that sums up my thought process about the most likely outcome:




Lyrics (contains curse words):

Children are innocent
A teenager's fucked up in the head
Adults are even more fucked up
And elderlies are like children

Will there be another race
To come along and take over for us?
Maybe martians could do
Better than we've done
We'll make great pets! 

My friend says we're like the dinosaurs
Only we are doing ourselves in
Much faster than they
Ever did

We'll make great pets!


----------



## doctoremmet (Jun 19, 2021)

ism said:


> So who knows.


Since my body (heart) kind of fails me, I can’t wait for the “brain-in-a-vat” scenario to become reality.


----------



## thesteelydane (Jun 19, 2021)

ism said:


> I do wonder what's going to be possible, though. In, say, 10 or 20 or 30 years, when everything above a low-end toaster will come with tens or hundreds or thousands of processors. Especially if the ultra-fast hardware linear algebra of neuromorphic chips can be leveraged in something like Fourier synthesis.
> 
> It's not that the industry is going to invest the necessary tens of billions to build a hardware computational paradigm for next-generation sampling .. but if it just so happens that the tens of billions being invested in machine-learning chips implement the same mathematics as sound, things could get interesting.
> 
> ...


Yes, but why would we? It would be easier and much more rewarding to learn a real instrument. I personally find striving for realism in sampling boring; I think using sampling to make the impossible possible is much more interesting, and almost an art form in itself. At least that's what keeps me going...


----------



## doctoremmet (Jun 19, 2021)

thesteelydane said:


> I personally find striving for realism in sampling boring


This  and yes Nikolaj. This is an art form. One that is performed by people like yourself and Pendle Poucher.


----------



## thesteelydane (Jun 19, 2021)

doctoremmet said:


> This  and yes Nikolaj. This is an art form. One that is performed by people like yourself and Pendle Poucher.


Well I just finished editing 2306 short samples, and it took me exactly 147 hours (yes, I track my hours). Feels more like a job than an art form at this point to be honest.


----------



## doctoremmet (Jun 19, 2021)

thesteelydane said:


> Well I just finished editing 2306 short samples, and it took me exactly 147 hours (yes, I track my hours). Feels more like a job than an art form at this point to be honest.


I get that. And respect it even more. If you put a "buy everything I may or may not eventually succeed in creating and releasing" button on your site, I'd immediately click it though.

No clue to what extent Danes can reasonably be expected to decipher Dutch, but we have this saying: de kost gaat voor de baat uit (roughly: the cost comes before the benefit).

(omkostningerne går ud til fordel)

Which is true. But sometimes artists also need patrons, muses and that whole shebang.


----------



## d.healey (Jun 19, 2021)

thesteelydane said:


> Well I just finished editing 2306 short samples, and it took me exactly 147 hours (yes, I track my hours). Feels more like a job than an art form at this point to be honest.


That's a lot of time! That amount of samples would usually take me a few hours. Send me a PM if you'd like to tell me about your editing process and maybe I could suggest some things to speed it up.


----------



## ism (Jun 19, 2021)

thesteelydane said:


> Yes, but why would we? It would be easier and much more rewarding to learn a real instrument. I personally find striving for realism in sampling boring, I think using sampling to make the impossible possible is much more interesting and almost an art form in itself. At least that's what keeps me going...


Which is cool.

But it's also cool that I'm able to write a symphonic piece for an audience of one, and send it via email.

I'd rather write something that's deeply meaningful to one person than something that gets a million casual listens on Spotify.

This is not an approach to music that one could ever make a living from. But Jane Austen wrote her novels, at first, largely for her family and a handful of trusted friends. And there are theories that the genius by which she set the course of the English novel for the next couple of centuries was set in motion by this context of hyper-locality, in ways that might have played out very differently had she been published and hobnobbing with the literati a decade earlier.

Not that this makes working with real orchestras and musicians and cathedrals any less magnificent. Just that, personally, I greatly appreciate even the fractional ability to approach this magnificence with samples. And on a larger canvas, this might allow new types of orchestral music to be written that, in a world constrained by the costs and social determinants of real orchestras, might not have been otherwise.

Not much of it will be good, of course (witness at least 99% of my compositions). But you never know when a new Jane Austen might emerge.


----------



## thesteelydane (Jun 19, 2021)

d.healey said:


> That's a lot of time! That amount of samples would usually take me a few hours. Send me a PM if you'd like to tell me about your editing process and maybe I could suggest some things to speed it up.


Thanks, will do. A lot of that was precision editing in RX, removing unwanted noises and resonances, then tuning, balancing and time-aligning. I do everything by ear because I believe it makes the end product more musical, but even so it surprised me that it would take this long.


----------



## d.healey (Jun 19, 2021)

thesteelydane said:


> Thanks, will do. A lot of that was precision editing in RX, removing unwanted noises and resonances, then tuning, balancing and time-aligning. I do everything by ear because I believe it makes the end product more musical, but even so it surprised me that it would take this long.


Ah, that explains it - I don't do a lot of cleanup work. I try to catch things during the session and re-record them. Occasionally I'll get my hands dirty, though, and start poking around with a spectrograph.


----------



## gsilbers (Jun 21, 2021)

Might be 


LKHD said:


> My question is a scientific one. If transducers, which are part of every speaker, convert electrical signals to sound and vice versa, why is it that we always must use samples recorded from real world instruments to produce real world sounds. Why can't we just produce real sounds from a program? Is this a mathematical or processing power issue?


I think it's a huge undertaking. We're slowly getting there. Physical modelling still needs a lot of human and real-world input, and that needs to be paid for somehow.
And going too far with a very realistic instrument that then fails commercially wouldn't help either.

Chromaphone and products from other companies are doing great on some instruments.

But performance-wise it might take a while to get to a decent level.

With that said, the road to getting there is giving us some very cool stuff. I've been playing the acoustic-modelling instrument from Reason and there are some insane sounds that sound like a mix of real with something else.
Very cool. Great for modern styles of music.


----------



## LKHD (Nov 19, 2021)

lychee said:


> This is one of the sounds I'm working on to finish my orchestral kit that I try to make sound as realistic as possible (with my poor knowledge).
> 
> 
> 
> ...



Yeah, that's not bad. It definitely sounds synthetic, but about as synthetic as sample libraries sounded back in the 90s. So I'd say that's an indication of the potential for development.


----------



## LKHD (Nov 19, 2021)

doctoremmet said:


> And the grandmother of all physical modelling synths, all the way back from 1993:



Yeah, that sounds really good. I don't think I've heard a synthesizer play folk-style music so convincingly before.


----------



## LKHD (Nov 19, 2021)

timprebble said:


> "can’t wait for processing power and models to catch up on samples."
> 
> We are such a long way away from this happening. What a high quality instrument, or prop, or animal, or ambience generates in the real world, as spatial audio, and when performed by a real human engaging with it and captured with high quality microphones is so complex that the idea a synthesized or modelled version will ever 'catch up' I personally believe is a technical delusion. It's not the transducer that is the issue - it's that nature is so incredibly complex.


I imagine that regardless of a thing's complexity, modelling it is certainly possible, even if currently inconceivable.


----------



## timprebble (Nov 19, 2021)

LKHD said:


> I imagine that regardless of a thing's complexity, modelling it is certainly possible, even if currently inconceivable.


I don't believe it is, necessarily.
Take a sound like a thunderstorm, with lightning strikes. The scale of the forces of nature is not going to be emulated by CPUs, nor by some kind of sky-based sound system larger and more powerful than anything in existence... unquantised in range, spectrum and scope. That's what I mean by technical delusion.


----------



## Pier (Nov 22, 2021)

LKHD said:


> I imagine that regardless of a thing's complexity, modelling it is certainly possible, even if currently inconceivable.


Everything can be modeled. It's really a matter of time, interest, and CPU power.

For example, I have no doubt there will be a modeling of the human brain at some point in the future. I don't think we will get to see it though. I also have no doubt in a couple of decades we won't be using samples anymore for acoustic instruments.

20 years ago, lots of people (me included) were skeptical of modeling analog synths and look where we are now.



timprebble said:


> Take a sound like a thunderstorm, with lightning strikes. The scale of the forces of nature is not going to be emulated by CPUs, nor by some kind of sky-based sound system larger and more powerful than anything in existence... unquantised in range, spectrum and scope. That's what I mean by technical delusion.


At some point it gets so fine that our senses can no longer perceive the quantization. E.g. no human can hear the aliasing in digital audio recorded at 192 kHz.

As for modeling the sounds of a storm, I'd be surprised if it couldn't be done with today's technology.
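The arithmetic behind that 192 kHz remark is just frequency folding: a tone above the Nyquist frequency (half the sample rate) aliases back down to a lower frequency. A tiny sketch (the helper name is mine, purely for illustration):

```python
def alias_frequency(f_signal, sr):
    """Frequency at which a pure tone of f_signal Hz actually appears
    after sampling at rate sr, folding around Nyquist (sr / 2)."""
    f = f_signal % sr
    return f if f <= sr / 2 else sr - f

alias_frequency(30000, 48000)   # a 30 kHz tone folds down to an audible 18 kHz
alias_frequency(30000, 192000)  # at 192 kHz it stays at 30 kHz - ultrasonic
```

So at 192 kHz any alias products land far above the range of human hearing, which is the sense in which no one can hear them.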


----------



## Tatiana Gordeeva (Nov 22, 2021)

Pier said:


> Everything can be modeled. It's really a matter of time, interest, and CPU power.


True, except for chaotic systems, apparently. They can be simulated, but they are too sensitive to initial conditions. The so-called butterfly effect and all that.
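That sensitivity is easy to see numerically. A minimal sketch using the Lorenz system, the textbook chaotic model (the step size and run length are arbitrary choices on my part):

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One (crude) Euler step of the Lorenz equations."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # identical but for one part in a billion
for _ in range(50_000):              # integrate 50 time units
    a, b = lorenz_step(a), lorenz_step(b)
separation = np.linalg.norm(a - b)   # no longer microscopic: the two
                                     # trajectories have fully decorrelated
```

Both runs are perfectly deterministic, yet a billionth of a difference in the starting point grows into completely different states: you can simulate the system, but you cannot predict the real one.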


Pier said:


> For example, I have no doubt there will be a modeling of the human brain at some point in the future. I don't think we will get to see it though. I also have no doubt in a couple of decades we won't be using samples anymore for acoustic instruments.


Absolutely agree. I've written about it many times in the Off-Topics forum.


Pier said:


> 20 years ago, lots of people (me included) were skeptical of modeling analog synths and look where we are now.


Exactly!


Pier said:


> At some point it gets so fine that our senses can no longer perceive the quantization. E.g. no human can hear the aliasing in digital audio recorded at 192 kHz.


Our senses are extremely limited and are surpassed by our technologies.


Pier said:


> As for modeling the sounds of a storm, I'd be surprised if it couldn't be done with today's technology.


Our _perception_ of lightning (sound, light, fields, etc) is easily simulated. Even the phenomenon itself, with millions of volts, can be reproduced in labs like this one that I visited some time ago:





Btw the building is 20 stories high


----------



## Pier (Nov 22, 2021)

Tatiana Gordeeva said:


> Our _perception_ of lightning (sound, light, fields, etc) is easily simulated. Even the phenomenon itself, with millions of volts, can be reproduced in labs like this one that I visited some time ago:


Wow that is straight out of a sci fi film!


----------



## Tatiana Gordeeva (Nov 22, 2021)

Pier said:


> Wow that is straight out of a sci fi film!


Yup, I think it was actually used in one of the Highlander movies or something like that.


----------



## timprebble (Nov 22, 2021)

Pier said:


> As for modeling the sounds of a storm, I'd be surprised if it couldn't be done with today's technology.


Seriously? Have you ever stood outside and experienced a thunderstorm? Thunder rippling across the sky with more power and definition than the best IMAX ever? Because no one is modelling that. And no human sound system could reproduce it. Even if they tried, it would be like taking an iPhone photo: a tiny snapshot of something that is beyond the ability of humans to create. This is the technical delusion I keep referring to: how hard could it be to 'model' something like this, with the definition and power of nature, where e.g. lightning can destroy a 100-year-old tree in microseconds? Here's a hint: look at the VFX in films, created by investing $50 million+ by massive teams of people with incredible resources... and they can't even make a human form believable. The uncanny valley gets you a cartoon version of a human.


----------



## timprebble (Nov 22, 2021)

Tatiana Gordeeva said:


> True, except for chaotic systems apparently. They can be simulated but are too sensitive on initial conditions. The so-called butterfly effect and all that.
> 
> Absolutely agree. I've written about it many times in the Off-Topics forum.
> 
> ...


And guess what? That simulation would never be mistaken for the real thing, experienced with the acoustics of a mountain range and with so many more magnitudes of power. Your term is right: simulation. 


"Our senses are extremely limited and are surpassed by our technologies."


I could not disagree more.
No microphone exists that can capture what a human experiences in the moment with their body and ears.


----------



## Pier (Nov 22, 2021)

timprebble said:


> Seriously? Have you ever stood outside and experienced a thunderstorm? Thunder rippling across the sky with more power and definition than the best IMAX ever? Because no one is modelling that. And even if they tried, it would be like taking an iPhone photo: a tiny snapshot of something that is beyond the ability of humans to create. This is the technical delusion I keep referring to: how hard could it be to 'model' something like this, with the definition and power of nature, where e.g. lightning can destroy a 100-year-old tree in microseconds?


I think you're not actually talking about mathematical modeling but rather about the reproduction of a physical phenomenon, right?



> Here's a hint: look at the VFX in films created by investing $50 million+ by a massive team of people with incredible resources... and they can't even make a human form believeable. The uncanny valley gets you to a cartoon version of a human.



I don't think that statement is accurate though. VFX teams never have $50M to make a single perfect human and are always working under pressure with tight deadlines.

I agree CGI humans on films look awful, but you can't deny they're getting better compared to 20 years ago.


----------



## timprebble (Nov 22, 2021)

Pier said:


> I think you're not actually talking about mathematical modeling but rather about the reproduction of a physical phenomenon, right?


I am responding to claims that any sound or instrument can be mathematically modelled in such a way that it is indistinguishable from the real sound event.


----------



## Pier (Nov 22, 2021)

timprebble said:


> I am responding to claims that any sound or instrument can be mathematically modelled in such a way that it is indistinguishable from the real sound event.


Then I'm sure that if we had the budget to put together a team with a supercomputer, they could model the sound of a storm, and it would be indistinguishable, listening through headphones, from a recording of a real storm.

Of course, I have no way to prove that claim, but I don't see why it wouldn't be mathematically feasible. We'll probably never know. Who would put up the money to do that?


----------



## Tatiana Gordeeva (Nov 22, 2021)

timprebble said:


> I could not disagree more.
> No microphone exists that can capture what a human experiences in the moment with their body and ears.


You're referring to the psychoacoustic, emotional/neurological experience, not the sensory response to physical stimuli. What goes on in your brain after the senses are involved is not part of them. The illusion of music (that's what it is, after all) all happens in your brain, through senses that can easily be fooled _completely._

BTW, microphones capture sounds well below the lowest human hearing threshold and outside our frequency range (infrasound and ultrasound). As for "echoing mountains"... easily simulated as well, as we all know.


----------



## Tatiana Gordeeva (Nov 22, 2021)

Pier said:


> I agree CGI humans on films look awful, but you can't deny they're getting better compared to 20 years ago.


Not anymore  I remember this recent episode on Netflix called _Snow in the Desert_. The CGI was amazingly realistic... and we're just starting. Imagine in 20 years!




Also, completely artificial human faces, generated on the fly each time you reload this page:




This Person Does Not Exist (thispersondoesnotexist.com)





It's time to wake up and sniff the virtual coffee!


----------



## timprebble (Nov 22, 2021)

Pier said:


> Then I'm sure if we had the budget to put a team with a super computer they could model the sound of a storm, and it would be indistinguishable when listening through headphones compared to a recording of a real storm.
> 
> Of course, I have no way to prove that claim, but I don't see why it wouldn't be mathematically feasible. We'll probably never know. Who would put money to do that?


listening through headphones? You're kidding


----------



## Pier (Nov 22, 2021)

timprebble said:


> listening through headphones? You're kidding


If we're talking about the mathematical model, then no, I'm serious.

If you're talking about reproducing the physical experience, then this is not about mathematical modeling but engineering and whatnot.


----------



## Tatiana Gordeeva (Nov 22, 2021)

I remember the time when people, seeing images like the one below, thought it meant that the "rough, squarish" digital signal would never be able to faithfully represent the "smooth" analog original. As was later explained to me, this is a _gross misunderstanding_ of how a signal is digitized, ignoring the filtering that happens before the A-D conversion. The resulting digital signal, given a bit of proper care in how you do it, is a _mathematically perfect representation_ of the band-limited analog signal. _Nothing_ from it is left out at all!
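This claim can be checked directly: sample a band-limited signal, and the Whittaker-Shannon sinc sum reconstructs its value even *between* the sample instants. A small numerical sketch (the sample rate, buffer size and test tone are arbitrary picks of mine):

```python
import numpy as np

sr = 48000.0                        # 48 kHz sampling -> 24 kHz Nyquist
n = 1024
t_samp = np.arange(n) / sr          # the sampling instants
f = 1000.0                          # a 1 kHz sine, safely band-limited
x = np.sin(2 * np.pi * f * t_samp)  # the stored "squarish" samples

def reconstruct(t):
    """Whittaker-Shannon interpolation: one sinc centred on every sample."""
    return np.sum(x * np.sinc((t - t_samp) * sr))

# Evaluate halfway between two samples, mid-buffer (away from edge truncation)
t_mid = (n // 2 + 0.5) / sr
err = abs(reconstruct(t_mid) - np.sin(2 * np.pi * f * t_mid))
# err is tiny: the samples really do carry the whole band-limited signal
```

With an infinite sample stream the reconstruction is mathematically exact; the small residual here comes only from truncating the sinc sum to a finite buffer.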


----------



## timprebble (Nov 22, 2021)

Tatiana Gordeeva said:


> Not anymore  I remember this recent episode on Netflix called _Snow in the Desert. _The CGI was amazingly realistic...And we're just starting. Imagine in 20 years!
> 
> 
> 
> ...



Sorry but your 'amazingly realistic' is what I consider a cartoon version of a human. 

And yes, I have seen the GAN thing that every trollbot on Twitter uses for its fake profile. That's exactly my iPhone-snapshot example. Being able to generate crud like that has little to do with creating a functional human form with infinite detail and character, indistinguishable from the real thing. As far as photos go, people were capturing beyond-gigapixel photos back in the 1920s - creating a realistic snapshot is not what we are talking about, is it?


----------



## Pier (Nov 22, 2021)

Tatiana Gordeeva said:


> Not anymore  I remember this recent episode on Netflix called _Snow in the Desert. _The CGI was amazingly realistic...And we're just starting. Imagine in 20 years!
> 
> 
> 
> ...



I love LDR and that episode in particular was very realistic. It still looks like CGI though!


----------



## Tatiana Gordeeva (Nov 22, 2021)

Pier said:


> I love LDR and that episode in particular was very realistic. It still looks like CGI though!


Agreed, but as I said: imagine what will be possible in 20 years, or probably less as things are progressing at an insane rate.


----------



## timprebble (Nov 22, 2021)

Tatiana Gordeeva said:


> I remember the time when people, seeing images like the one below, thought it meant that the "rough, squarish" digital signal would never be able to faithfully represent the "smooth" analog original. As was later explained to me, this is a _gross misunderstanding_ of how a signal is digitized, ignoring the filtering that happens before the A-D conversion. The resulting digital signal, given a bit of proper care in how you do it, is a _mathematically perfect representation_ of the band-limited analog signal. _Nothing_ from it is left out at all!


So you think a digital sample is equal to the analog original? It takes me less than two seconds in front of a real piano to know that is not true. And you're talking to someone who owns a pair of Sanken CUX-100K microphones, flat up to 100 kHz...


----------



## Tatiana Gordeeva (Nov 22, 2021)

timprebble said:


> ...creating a realistic snapshot is not what we are talking about, is it?


No, we're talking about creating a sequence of them at a rate and precision that can easily fool your, or anyone else's, senses.


----------



## timprebble (Nov 22, 2021)

Tatiana Gordeeva said:


> No, we're talking about creating a sequence of them at a rate and precision that can easily fool your, or anyone else's, senses.



Sorry, but briefly 'fooling the eye' or ear is a very low bar.
That is not what the OP was asking.


----------



## Pier (Nov 22, 2021)

timprebble said:


> As far as photos go, people were capturing beyond gigapixel photos back in the 1920s


Yeah, but what point are you arguing?

It's a fact that digital sensors surpassed film a long time ago in the data they can acquire - not only in detail but especially in dynamic range and sensitivity. That's why scientists use digital sensors for astronomy, not film.

Even cheap prosumer cameras have been able to shoot under moonlight for years, something which is impossible in analog without a lot of noise.





If you're arguing that a 300 MP digital camera (comparable in detail to the large-format cameras of the 1920s) is prohibitively expensive, of course I agree. But this is not a limit of the technology, just the economics of it.

For reference, here's a blog post comparing a 150MP camera with 8x10 large format film.









8x10 Film vs 150MP Digital: Can 150 Megapixels Compete? (petapixel.com)


----------



## timprebble (Nov 22, 2021)

Pier said:


> Yeah but what's the point you're arguing?
> 
> It's a fact digital sensors have surpassed film a long time ago in the data they can acquire. Not only in detail but specially in dynamic range and sensitivity. That's why scientists are using digital sensors for astronomy and not film.
> 
> ...




My point is that a snapshot is not the goal. If it was, then we got there in 1920. The goal is also not a series of snapshots strung together to fool the ear or eye. My attitude is a reaction to the tech-bro 'everything can be modelled and recreated in the computer' mindset, when clearly it cannot, because reality is infinite and infinitely variable. A snapshot is the best we can do.

The VFX human examples are so revealing because the VFX industry is so far ahead of the music industry (in tech, budget, research, resources, scrutiny, resolution, detail, etc.), and yet they can only create cartoon versions of humans that no director would mistake for a real actor. Even if that virtual human briefly looked 'perfectly human', would it salivate? Would it literally crap itself as its stomach fills with fear? No - we get some version of a human form which, no matter how much tweaking is done, is clearly not a real human, and that is clear within seconds.

An actor is not just a physical snapshot. An actor brings with them their entire life's experience, and all of the things that make them human and not a simulation. That little smile, or the tear in their eye, or some other tiny detail or 'imperfection' - all things that no one would ever 'model' accurately, because they would never think to...

At best, what we get from a digital version - sampled, modelled, rendered - is a tiny, tiny snapshot.




PS: re the "150MP camera with 8x10" - the 1920s gigapixel photo I mentioned has a negative rather larger than 8x10": the neg is 51" x 11", so to match this 1920s photo would require 6 x 150MP cameras.


----------



## Pier (Nov 23, 2021)

timprebble said:


> My attitude is a reaction to the tech-bro 'everything can be modelled and recreated in the computer' mindset, when clearly it cannot, because reality is infinite and infinitely variable.


But everything can indeed be modelled mathematically, at least in theory. We (you and me) might never see a perfect CGI model of a human, but theoretically speaking it's really only a matter of time. Again, the improvements over the past 20 years have been massive.

I will post again this CGI human from Blade Runner 2049. Nothing from the 2000s era comes even close to this.







The fundamental problem, I think, is that you're talking about more than the mathematical model itself. Again, you're talking about "recreating" a physical phenomenon, and that goes well beyond modelling.

Also, reality is finite if you consider that there are some very concrete rules by which the universe is governed. Of course the number of combinations is very, very big, but not infinite - much less at the scale of human perception. Another point to take into account is that science still hasn't settled whether reality itself is quantized. See Planck time, for example.

I don't think humans will ever be able to control reality itself. That does seem pretty far-fetched. Although, who knows? Maybe someone in 10,000 years will find a way to manipulate subatomic particles and be able to recreate, well, anything.


----------



## timprebble (Nov 24, 2021)

Pier said:


> But everything can indeed be modeled mathematically, at least theoretically. We (you and me) might never see a perfect model of a human using CGI, but theoretically speaking it's really only a matter of time. Again, the improvements in the past 20 years have been massive.
> 
> I will post again this CGI human from Blade Runner 2049. Nothing from the 2000s era comes even close to this.
> 
> ...


And again my point is: what is "a perfect model of a human using CGI"?
Do you not see the complete and utter paradox?


To me, these aspirational goals are like saying 'humans will live on Mars'.
Yes, of course they 'might' at some point, and yes, lots of work is being done towards such goals, but that does not make the end result a certainty, just by wishing it so. I could also say "a perfect model of a human using CGI will never exist" and be equally correct as far as our current reality is concerned.

The theory that these extremely difficult projects will be achieved is pure tech optimism, and while I support optimistic thinking, you only have to consider how a combination of future pandemics and climate change could easily mean such idealistic projects are thwarted by more pressing issues, like survival, within our lifetime.


----------

