# Is this 2022? Or still 1996?



## pefra (Oct 24, 2022)

Hi there,
I learned Cubase in the last century and quickly found myself writing scores with Cubase VST 5. Back then the first VSTs arrived, which have since evolved into wonderful libraries which you can use to create almost perfect mockups of orchestral pieces.

2022 - composing (not engraving!) with software: After all these years I still find myself fiddling around with Control Changes, drawing lines, putting in articulation commands with a mouse and a keyboard, recording Expression and Modulation, and when I change the corresponding library I have to start all over again to make it sound as it did before. Sure, all DAWs have evolved to a point where almost nothing new can be introduced to keep forcing you to spend money on them (ok, that's another story for a long winter evening), and that's also true for the notation departments in every big DAW, which have evolved to a point where they are almost usable.

My point is: for the last few decades we have mostly been using a blue hammer (Sibelius) or a red hammer (Finale), as someone on the forum put it, or some other hammers in even funnier colors. Then Dorico arrived, where you can now choose the color you want your hammer to have, and it also comes with a finer chisel. The reality is: we are (still and again) drawing, clicking, pointing and moving lines in a "new" key editor. Even after decades of development:

Why are we still drawing lines with a mouse at all when the only thing we want is a flute sound that behaves and sounds like a flute? In 2022?

Enter NotePerformer / Staffpad. YES! FINALLY a new level. Hack in your notes, enter p, ppp, f, fff, hairpins, whatever, and there you have it!

This is how I want it to be in 2022! That's where the future of composing in notation lies! Why do we still need instrument libraries based on simple samples in 2022? Don't get me wrong, the samples are nice, but they behave differently, no matter how many round robin layers you put in. And they don't interact, at least not as far as I can hear. Listen to ONE flute, then introduce another one. Result: a different sound, because the flutes mix. Now do this with a sound library. Result: now you hear TWO voices - and they don't mix. After all, and no matter how good the underlying samples are, they are still just samples. Now, when I want them to sound like a real flute, it's still me who has to do all the work. Fine, but I HAVE A COMPUTER. I want it to do the work for me; that's what computers were invented for! I don't want to use a hammer any longer! I want to show my computer a hand drawing and have it start chiseling the sculpture immediately!

Why are developers in 2022 concentrating on re-inventing a "key editor" in notation software? Only to have us pushing the mouse as we already did 20 years ago? Steinberg, Avid, MakeMusic, where is your AI-generated flute that sounds and behaves like a flute? When NotePerformer can do it, why can't you? Well, you can't, because obviously you are still thinking in 1996 terms. You are building a better hammer, can't you see? Why don't you join us? We, your clients, are already there, in 2022. Come and follow us, it's interesting here.

AND TO ALL OF YOU developers of libraries: just in case we are still being forced to push pixels into the next decade of line drawing, we then need your libraries to come with "Expression-Sets" that I can import into my notation program. No, I'm not talking about "Articulation-Sets"; I'm talking about a container that comes with the flute sounds AND a description of how the flute samples have to be treated by the program in order to behave like a flute, and that I can then import into whatever program I'm using. Why do I have to draw all the lines myself? Why can't YOU do this for me? After all you obviously know how a flute has to sound, just in case I don't (I do, but not everybody does) - you recorded the samples. Your libraries are a collection of samples, fine. Now go one step further! And then get in touch with the big ones and make them integrate your Sound Containers! Or invent something else that gives me a perfect mockup (or even the master file; that's what many composers do with your libraries anyway) that sounds so natural that I get goosebumps, because I write better songs when the sound touches me - but without me drawing mile after mile of control changes in some software.
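To make the "Expression-Set" idea concrete, here is a minimal sketch of what such a vendor-shipped container could look like. Everything here is hypothetical (the field names, CC numbers and dynamic values are invented for illustration, not any existing format): a declarative mapping from notated dynamics to the CC data the library expects, which a notation program could then render automatically.

```python
# Hypothetical "Expression-Set": a vendor-shipped description of how
# notated dynamics should be rendered as MIDI CC data for one library.
# All names and values are illustrative, not an existing standard.

EXPRESSION_SET = {
    "instrument": "flute",
    "dynamics_cc": 11,                # this library reads dynamics on CC 11
    "dynamic_levels": {               # target CC values for notated dynamics
        "pp": 20, "p": 40, "mf": 64, "f": 90, "ff": 115,
    },
}

def render_hairpin(expr_set, start_dyn, end_dyn, ticks, step=24):
    """Turn a notated hairpin (e.g. p < f) into a list of CC events."""
    cc = expr_set["dynamics_cc"]
    lo = expr_set["dynamic_levels"][start_dyn]
    hi = expr_set["dynamic_levels"][end_dyn]
    events = []
    for t in range(0, ticks + 1, step):
        value = round(lo + (hi - lo) * t / ticks)
        events.append((t, cc, value))   # (tick, controller, value)
    return events
```

A notation program that understood such a container could draw the mile of control changes itself; the user would only type p, a hairpin, and f.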

Would be very interested in hearing how the community sees this.

Have fun!


----------



## Inventio (Oct 24, 2022)

I have a different view and I am a fully classically educated composer, pianist and harpsichordist. 

When I am in the DAW, I like to play. And I like to have access to different instruments, sounds and possibilities. 
Not to simply emulate reality but to be expressive and have fun. That's why I prefer "playable" patches. 
In that place (Cubase, for example), my focus is on the whole of orchestrating and mixing with sounds.

When I am in Dorico or on paper, it's a different thing to me: composing is more of a classical activity where every sample has limitations anyway, so I let my aural imagination work.

I know it's a very personal, dual position, where sometimes the two parts meet halfway, but I thought it could give another point of view to the discussion.


----------



## Vlzmusic (Oct 24, 2022)

Take a chill pill. You would sell your kidney in 1996 to get libs you have today.

P.S. And no, Noteperformer flute doesn't sound like a flute (yet).


----------



## pefra (Oct 24, 2022)

Hi Vlzmusic,

thanks for chiming in. Thanks also for the medical advice, but I can promise you I'm absolutely chilled. And of course I was exaggerating (if only a bit) to drive the point home. But you did get that, right? If you feel offended, sorry, that wasn't my intention.

Selling my kidney in 1996 - maybe (but probably not). But now it's 2022 and we are still working in the same way as we did back then. Don't worry, Dorico is fine, Cubase is fine, everything else is also fine except for the weather. The only thing is I cannot see where we are headed, and I guess in 2030 we will still have to draw our mile-long expression lines in some DAW or notation program, perhaps in some Hyper-AI-Key-Editor developed by Elon Musk? Meanwhile programs like NP or Staffpad are showing *one* of several ways it could be done in the *near* future. Want my disclaimer? Ok: no, I'm not connected to either one.

No, NP's flute does not sound like a flute; no library flute does, right? I know, I play the flute amongst other instruments. The point is I don't have to think about how some instrument must be programmed inside some software, because NP or Staffpad or whatever is taking care of this, and the results are impressive. Now give me this with the *perfect* sound and voilà, here we go!

So that's just my opinion, but it's fine if you are happy with what we have now. Problem solved.

Any further ideas then?


----------



## Beyond4A (Oct 24, 2022)

I completely agree with the OP. This is something I think about all the time.

When I got into composing a few years ago, I went and bought a few good libraries, but soon realized that while pressing a key and triggering the sample sounds great, assigning notes to them in a score sounds lifeless and robotic. It was then that I realized that I have no interest in (or patience for) drawing lines for every instrument in an orchestral score. Then I got Dorico and NotePerformer (for $130, which is insane in my opinion) and haven't purchased a library since. While I'm not a fan of the strings, it just sounds good out of the box, generally speaking.

This whole time, I can't for the life of me see why the future of all these libraries isn't a playback engine that plays back all those wonderful-sounding samples (à la NotePerformer). Instead of hundreds of sample libraries, there should be hundreds of companies competing against NotePerformer.

Speaking of 1996... I imported a friend's piano piece from a MIDI file last year, and it looked like garbage. I just kept thinking: MIDI has been around for decades and they still can't figure out how to export/import MIDI from one computer to another without beams and sharps and flats all over the place. I mean, in 2022, it should just work. But alas, it often doesn't. Maybe in 100 years with MIDI 3.0.
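The sharps-and-flats mess on MIDI import has a concrete cause worth spelling out: a MIDI note number encodes only pitch, not notation, so the importer has to guess the spelling. A tiny illustration (the lookup table below is just for demonstration):

```python
# A MIDI note number carries no spelling: note 61 is equally C#4 or Db4,
# and a Standard MIDI File gives the importer no way to know which one
# the composer wrote. The table below is illustrative, not exhaustive.

ENHARMONIC = {
    60: ["C4"],
    61: ["C#4", "Db4"],
    63: ["D#4", "Eb4"],
    66: ["F#4", "Gb4"],
}

def possible_spellings(midi_note):
    """All plausible notated spellings for a MIDI note number."""
    return ENHARMONIC.get(midi_note, ["?"])
```

Key signatures and melodic context can narrow the guess, but the file format itself simply doesn't store the answer.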


----------



## Bollen (Oct 24, 2022)

The answer should be obvious: it's for the same reason sculptors still use hammer and chisel and not 3D printers. Furthermore, sample libraries tend to cover techniques which are NOT represented in notation.

I agree with your sentiment, but unless you and I can come up with a better way to 'sculpt' sound, they're not going to change it.

I guess the better question is why in 2022 are MIDI editors still so bloody clunky???


----------



## Beyond4A (Oct 24, 2022)

Bollen said:


> Furthermore, sample libraries tend to cover techniques, which are NOT represented in notation.


Sorry, could you clarify what you mean by that? Techniques are notated in the score. Apologies if I'm misunderstanding.


----------



## Bollen (Oct 24, 2022)

Beyond4A said:


> Sorry, could you clarify what you mean by that? Techniques are notated in the score. Apologies if I'm misunderstanding.


No, no worries. A simple example: on a woodwind, whether a note is attacked with breath or tongued is up to the performer. You can indicate it if you desire, but it's almost never done.


----------



## pefra (Oct 24, 2022)

Bollen said:


> Furthermore, sample libraries tend to cover techniques, which are NOT represented in notation.


I disagree. What I see is that sample libraries are forcing us to put things in our DAW that can not be represented in notation. It's the other way around...




Bollen said:


> The answer should be obvious, it's for the same reason sculptors still use hammer and chisel and not 3d printers.


Yes, because nobody has built a working 3D stone printer as of today (not even sure about this). Build one and they will use it, I promise you. Yes, some still work with hammer and chisel, for the same reason John Williams still uses paper and pencil. I've learned this too, but I no longer do it. I want my computer to do this for me. And I want it to give me a perfect acoustic representation of what I'm writing as I'm writing it.




Bollen said:


> I guess the better question is why in 2022 are MIDI editors still so bloody clunky???



My question is why do we need MIDI editors at all in notation software in 2022? See above in my post...


----------



## Sarah Mancuso (Oct 24, 2022)

pefra said:


> Why do I have to draw all the lines myself? Why can't YOU do this for me? After all you obviously know how a flute has to sound, just in case I don't (I do, but not everybody does) - you recorded the samples. Your libraries are a collection of samples, fine. Now go one step further! And then get in touch with the big ones and make them integrate your Sound Containers!


I mean, if there was only one way of playing something on a flute, we wouldn't have this "problem", but music performance doesn't work that way. When you are using a virtual _instrument_, you are the performer. It's no different than playing a keyboard or synth, or an actual flute: what you put into it is what you get out of it.

My experience is almost always that the more a virtual instrument tries to outsmart me by doing what it _thinks_ I want instead of what I tell it to do, the less usable it is and the worse the results are.



> Or invent something else that gives me a perfect mockup (or even the master file, that's what many composers do with your libraries anyway) that sounds so natural that I get goose bumps, because I write better songs when the sound touches me - but without me drawing mile after mile of control changes in some software.


If your music isn't giving you goosebumps, that's not the tools, that's just your music.


----------



## Bollen (Oct 24, 2022)

pefra said:


> I disagree. What I see is that sample libraries are forcing us to put things in our DAW that can not be represented in notation. It's the other way around...
> 
> 
> 
> ...


I was going to reply to you, but @Sarah Mancuso said it best! And you're wrong about your first point...


----------



## Geoff Grace (Oct 24, 2022)

Man, if it's 1996 then I'm late for the studio!

I'd better sign off of America Online and get going.

Best,

Geoff


----------



## pefra (Oct 25, 2022)

Sarah Mancuso said:


> When you are using a virtual _instrument_, you are the performer. It's no different than playing a keyboard or synth, or an actual flute: what you put into it is what you get out of it.


Hi Sarah,
I know my virtual libraries CAN be my instrument. My point is that I don't want to play this instrument! The effort it takes is too much! I want my computer to do this; read my post. This is vi-control's notation corner: Sibelius, Finale & Dorico - people here are mostly talking about notation software. Please keep that in mind, that was my starting point.

I can promise you, when you have to deliver a full score for an orchestra in 10 days you don't want to fiddle around with CC 7, CC 11, Modulation, Articulation etc., blah blah. At least *I* don't want to tell my DAW: look, here I want you to transition from this note to that note by using this special legato sound that we have on the computer, you know, the one that comes when I hit note number 12 down there on my keyboard, you know, that special articulation that only this special virtual instrument understands that we are using right now, and please play it with a Velocity of 90, overall Volume 100, and oh yes, keep pan as it is, but change Rev to 30. Etc. etc. etc., all the while pushing the mouse, having to open a "Key" Editor which can only handle ONE instrument at a time, and so on and so forth. And when I delete this specific note my controllers are also gone and I have to start all over again. Right...
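Counted out, the routine described above really is a pile of discrete events per phrase. A rough sketch of what one two-note legato transition can cost, using plain tuples rather than any particular MIDI library (the keyswitch note, tick values and controller values are made up, mirroring the numbers in the post):

```python
# Every value below is something the user currently sets by hand:
# keyswitch, velocity, CC7 (volume), CC11 (expression), CC91 (reverb send).
# All numbers are illustrative.

def legato_transition(tick, note_a, note_b, keyswitch=12):
    """MIDI events for one legato transition, as (tick, type, data) tuples."""
    return [
        (tick,       "note_on",        (keyswitch, 1)),   # select legato patch
        (tick,       "control_change", (7, 100)),         # volume
        (tick,       "control_change", (11, 90)),         # expression
        (tick,       "control_change", (91, 30)),         # reverb send
        (tick,       "note_on",        (note_a, 90)),     # first note, velocity 90
        (tick + 480, "note_on",        (note_b, 90)),     # slight overlap...
        (tick + 500, "note_off",       (note_a, 0)),      # ...triggers the legato sample
        (tick + 960, "note_off",       (note_b, 0)),
    ]
```

Eight hand-set events for two notes; multiply by every note of every instrument, and the appeal of deriving all of this automatically from the notation is obvious.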

Instead I chuck my notes in, ditto all necessary articulations and playback instructions, in Sibelius (no advertising, use what you want) or whatever, have it play through NP, and the result is good enough to check the composition, the levels of every instrument and the overall feel. That's what we are talking about. Now give me something like NP, no matter the company, on a high level - let's say on the level that our sample libraries already ARE (!) - and here we go! The sound of our libraries is absolutely sufficient, but the playback system is not. That's my point. And the overall effort it takes to get a decent result, well - it's just too much, at least for me. It's fine for you? Fine.



Sarah Mancuso said:


> My experience is almost always that the more a virtual instrument tries to outsmart me by doing what it _thinks_ I want instead of what I tell it to do, the less usable it is and the worse the results are.



Right, that's why we need better virtual instruments and first of all better integration of these instruments into the host software. See my OP...



Sarah Mancuso said:


> If your music isn't giving you goosebumps, that's not the tools, that's just your music.



Sounds a bit childish, right? But it draws some likes, so fine with me.

I was talking about SOUND, not my music. Can you imagine hearing a 10-minute orchestral piece played back by Behringer's DeepMind and running around your studio shouting GREAT! THAT'S IT!? Can you? Believe me, sounds matter a lot when composing for orchestra.

But I'll bite: who said my music isn't giving me goosebumps? It does every now and then, especially so when working with sounds that behave almost like a real orchestra. I also get goosebumps when people like my music when they hear it in cinemas; you don't know me. But I better go ghost now, you know.


----------



## pefra (Oct 25, 2022)

Bollen said:


> And you're wrong about your first point...


Yes Bollen, I see. You clearly have the better argument. 

But feel free to elaborate...



Bollen said:


> I agree with your sentiment, but unless you and I can come up with a better way to 'sculpt' sound, they're not going to change it.



Well, I have at least formulated some ideas how to do this. It could be done. But you're right, they are probably not changing anything, and that's because most developers (or is it the marketing department?) are still thinking in 90's terms. See above...


----------



## Anton K (Oct 25, 2022)

pefra said:


> Well, I have at least formulated some ideas how to do this. It could be done. But you're right, they are probably not changing anything, and that's because most developers (or is it the marketing department?) are still thinking in 90's terms.


I think devs are at a much higher level now than in the 90s. There are fundamental problems here.

There is no appropriate general file format for expressive digital scores at the moment. MIDI is very low-level; MusicXML is suited as an interchange format, but not so much as a native format for applications.
There is MNX as the successor to MusicXML, which seems to be improving in some respects, but progress is coming slowly and it has been in the making for quite some time. So there is not a single appropriate file format.

And a possible solution would be extraordinarily complex; there are so many points to consider, e.g. how a trill is played in connection with the measure and (possibly changing) tempo. In this spirit there are thousands of cases to consider.
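The trill example is a good illustration of why this is hard. Even the simplest rendering means deciding on an alternation rate and fitting whole alternations into a duration that depends on tempo. A deliberately naive sketch (fixed rate, lower-note start, no accelerando, no tempo change inside the note - every one of these defaults is an assumption a human performer would override by ear):

```python
def render_trill(main_note, duration_beats, tempo_bpm,
                 rate_hz=8.0, upper_interval=2):
    """Expand a notated trill into (midi_note, start_sec, length_sec) events.

    Naive model: constant alternation rate, a whole number of alternations,
    main note first, diatonic upper neighbour a whole tone above -- exactly
    the decisions a performer currently makes contextually.
    """
    total_sec = duration_beats * 60.0 / tempo_bpm
    n = max(2, round(total_sec * rate_hz))   # number of alternations
    step = total_sec / n
    events = []
    for i in range(n):
        note = main_note if i % 2 == 0 else main_note + upper_interval
        events.append((note, i * step, step))
    return events
```

Already this tiny model ignores Baroque upper-note starts, Nachschlag endings, half-tone upper neighbours and ritardandi - which is the "thousands of cases" point in miniature.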

It is easy to voice some high-level ideas (that is done in the Dorico forum too), but much harder to derive meaningful guidance for implementation.

I think the vendor with the highest competence worldwide is Steinberg, as creator of Cubase and Dorico, but the integration of both apps has not happened yet, although so many users (me included) would like more movement in this direction.


----------



## thevisi0nary (Oct 25, 2022)

Don't agree with everything but thoroughly agree with the underlying point. The fact of the matter is that until there is some type of overall development standards that developers all decide to agree to, and until there is more integration between daws and samplers, this is the way it will be. Notation editors (good ones) feel fluid because the developer has control over the entire thing.


----------



## pefra (Oct 25, 2022)

Anton K said:


> I think devs are at a much higher level now than in the 90s. There are fundamental problems here.
> 
> There is no appropriate general file format for expressive digital scores at the moment. MIDI is very low-level; MusicXML is suited as an interchange format, but not so much as a native format for applications.
> There is MNX as the successor to MusicXML, which seems to be improving in some respects, but progress is coming slowly and it has been in the making for quite some time. So there is not a single appropriate file format.
> ...


Hi Anton,

thanks for chiming in. I know this same old question: do we give the customer what HE wants, or do we give him what WE are able to develop? I can clearly see your point and you're right. It's easy to formulate some ideas and shout them out. But on the other hand, such ideas often come from the people who are using the product every day, and these users know exactly what they (!) are missing. But no user who isn't at the same time a developer can break this down into a roadmap that could be picked up by a company; that's also true. At least I myself can't.

I know Cubase inside out; I wrote a 700-page book about it in 2000. And that was my starting point. My thinking is that we don't need even more features in our DAW / notation software.
Apart from the fact that today I still only use 5 % (if at all) of what Cubase can give me, I'm also still using the same tools, only on a more sophisticated level. The key editor I'm using now looks way better and definitely has more features than back then, but it wants me to work in the same old way I already did in 2000. That's not necessarily a bad thing, but it leads me to my point: where are we headed?

I personally don't need Cubase or any other software developed much further, at least not with regard to new features, t.b.h. Just bugfix it; the rest has already been there for years. But I want companies like Steinberg to think in a different direction. What's the next step? An even better key editor? What does that even mean? Features from Cubase into Dorico, features from Dorico into Cubase? Really? If I opened a MIDI file in Cubase VST 5 XT from 2000 and the same file in the latest Cubase version, chances are that to my wife the GUI at the uppermost level and the key editor would look almost identical (apart from the colors), and my wife is a trained concert pianist and musicologist with a degree.

What I would like to see is more integration between the software that PLAYS my composition and the software that GENERATES the corresponding SOUNDS. I guess "playback engine" is the keyword here. I can tell you, for me the advent of NotePerformer was a milestone. Staffpad could also have been, but I don't like touch systems. I haven't touched my libraries since NP. Now give me this on a high-end level and we are almost there, at least with regard to the orchestral film scoring notation department. But if you think about it, NP or something like it could also have totally different sound sets: why not have virtual synth sounds inside? Wavetables? There you have it. Now integrate this into my notation software and voilà. But of course other ways of working with DAWs are also still valuable; if someone wants to push pixels and edit expression lines - fine, give them the best tools you can develop.

But I don't think we as composers should be editing control changes in some editor in 2030.

cheers


----------



## pefra (Oct 25, 2022)

Anton K said:


> And a possible solution would be extraordinarily complex; there are so many points to consider, e.g. how a trill is played in connection with the measure and (possibly changing) tempo. In this spirit there are thousands of cases to consider.



Anton,
just one add-on. Yes, I know, because I'm the one who also has to decide on this when working with sound libraries. I have to think about all of this to have my mockups reach a certain level. And when engraving the project into something real musicians can play, it still sounds very different from what I heard from my computer. But that computer with its software can give me a representation of my ideas in a way that impresses me and helps me invent music, because that's what we do.

I can only hope that in the future the sounds I need every day can be computed, as in generated and calculated, instead of only being played back out of a simple sample library. After all these years we are still so limited with libraries: yes, we can change volume and expression and such, but we haven't even touched changing the frequency content of a sample and having control over the inherent frequency spectrum in relation to a second sample of the same kind, for instance (see above in the OP). We are still only crossfading between prerecorded samples.

Long way to go, but it could make sense to alter the roadmaps a bit into a direction that people more informed than I am have already formulated.


----------



## Anton K (Oct 25, 2022)

pefra said:


> Long way to go, but it could make sense to alter the roadmaps a bit into a direction that people more informed than I am have already formulated.


What is this direction? I really don't know. One has to define actionable steps, which is not trivial here.
What could be the first steps in the right direction? I am from time to time working on getting a musical performance in MIDI out of trills, and even this tiny problem is quite hard and will always be a work in progress.

To get to the point you are envisioning, there need to be devs with very broad capabilities. Just getting into audio programming in C++ is hard enough. And here one also has to have knowledge of traditional notation and of the art of shaping a musical performance (e.g. rubati, phrasing, articulation).

Hi Pefra,

thank you for your thoughts. I can only respond very generally, but I speak from my expertise as both a classical piano and organ player and a software dev.

I would be delighted if such a high-end playback engine could be made. Then I think quite a lot of actual DAW users would not need much of the traditional DAWs anymore, as you say you only use 5% of Cubase.

But to get there, I guess, there must be an appropriate data format; it could be open source or behind company doors. I don't know if such a format is currently being developed; I don't see anything near.
It remains to be seen how far the next version of NP will get. I think I have read that it works (partly?) with machine learning (I don't know for sure), so a totally different path could be implemented there compared to the traditional MIDI handling in DAWs.

It would be enlightening if music tech companies would give more info about their algorithms.


----------



## Anton K (Oct 25, 2022)

thevisi0nary said:


> ... until there is some type of overall development standards that developers all decide to agree to,


Important point


----------



## SZK-Max (Oct 25, 2022)

But libraries are going in the opposite direction of what you hope for (they're evolving into THE INSTRUMENT). If you follow your logic, the more the library evolves, the more backwards the times will be. From the conductor's point of view, they need over 10,000 round robin layers.


----------



## LatinXCombo (Oct 25, 2022)

*Regarding the comment: "Hack in your notes, enter p, ppp, f, fff, hairpins, whatever, and there you have it!"*

[TL;DR - MIDI is outdated and sucks, the hardware we have to enter the data with sucks, but even if they were awesome, that still wouldn't change the fact that the player needs to master the instruments he's playing, even if it's a computer. TANSTAAFL.]

There's a lot wrong or outdated with music tech (_OMG, MIDI, 1983, STILL USING IT 40 years later! 127 levels of velocity! Oh, and we fear change so much that we're using velocity to control other things now! Because we don't have a choice!_)
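A footnote on the "127 levels" complaint: the seven-bit range (0 to 127) applies to velocity and controllers alike, but for controllers MIDI 1.0 itself already defines a partial workaround that few instruments implement: controllers 0 to 31 can be paired with LSB controllers 32 to 63, giving 14-bit (16,384-step) values. A sketch of that encoding:

```python
def encode_cc14(value14):
    """Split a 14-bit controller value (0..16383) into MSB/LSB data bytes.

    Per MIDI 1.0, CC n (0-31) carries the MSB and CC n+32 carries the LSB.
    """
    if not 0 <= value14 <= 16383:
        raise ValueError("14-bit CC value out of range")
    return value14 >> 7, value14 & 0x7F   # (msb, lsb)

def decode_cc14(msb, lsb):
    """Reassemble the 14-bit value from its two 7-bit data bytes."""
    return (msb << 7) | lsb
```

Velocity has no such escape hatch in MIDI 1.0; full 16-bit velocity only arrives with MIDI 2.0.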

But...

I see two problems here: (1) The Hardware Problem, and (2) The BMW 7-Series problem

(1) The Hardware Problem. 

The text of music is more of a guideline than a precise instruction. What's the difference between fff and ff? How do you translate "_Molto deliberato, freely at first_" into commands that a machine will understand? It's the players' (and conductors', maybe) interpretation that makes the difference.

You NEED precise control to pull this off. If you try to automate it too much you'll end up with whatever the developer thought you probably wanted. What automation does for you, it also does TO you. 

I feel that the hardware hasn't kept up with the need to draw and edit these curves with precision and efficiency. It seems like some sort of touchpad/stylus solution would be optimal, but I don't see anyone talking about that. Breath controllers sound great (no, I haven't tried one yet), but that's only one part of the equation, and it kind of feels like the wrong solution for strings, you know what I mean? (Could be wrong; willing to entertain better solutions, naturally.)



(2) The "BMW 7-series" problem. 

The 7-series is a $100,000 car. If you buy one new, you will spend $100,000 on it, but you'll have a warranty and maintenance included for five years. If you buy one used for, say, $25,000, you'll be out of warranty and spend $75,000 on maintenance and repairs over the next five years. Either way, you're going to spend $100k.

Same as when creating music that's supposed to be played by more than a couple of instruments.

To get an ensemble / orchestra to play music right, someone has to spend the proverbial 10,000 hours mastering the instrument. 

If the song calls for a three piece band, you can probably do it yourself. But when you reach orchestra level, well, hell, you either need an orchestra or you need a shortcut. 

Samples/synths are just shortcuts. You're trying to circumvent the need to spend (10,000 hours * however many instruments you'd really need to make it sound right). They're getting really good, but at some level you're going to need to master the instrument.

It's a tradeoff -- you can limit the variables you can edit, but you'll be giving up quality. You can maximize the variables you can edit, but you'll need to spend more time mastering the nuances. 

Or, as Heinlein put it, there ain't no such thing as a free lunch.


----------



## Bollen (Oct 25, 2022)

pefra said:


> Yes Bollen, I see. You clearly have the better argument.
> 
> But feel free to elaborate...


I did, and others have said it as well. I think the point you're missing is that it's NOT about technology, but rather art. Can I presume you're not an instrumentalist yourself? I spent 20 years mastering an instrument (woodwind) before I got into computers. Of those 20 years I spent 15 writing music and rehearsing it with real ensembles. So I know for a fact that there are hundreds if not thousands of musical 'elements' that are not included in the page i.e. written notation. The instrumentalists themselves make hundreds of choices about how they finger, attack, cut off, join and fluctuate every note. Then you have a conductor, a composer or both dictating overall things that are, again, not on the page. It is _this_ magnificent collaboration that makes music sound the way it does, the fact that it involves so many 'minds' and diverse musical tastes. When I started using software (notation) I immediately realised that it could not play _music_, despite having the most elaborate expression maps and playback rules. Partly because music is contextual, but mainly because it's artistic, i.e. there is no situation standard enough to dictate: “in this situation play a note like this”. Consequently, much to our despair, I do everything manually. I don't let the software do anything, not even handle dynamics! At least in this manner I can approximate how a real ensemble would perform.


pefra said:


> But you're right, they are probably not changing anything, and that's because most developers (or is it the marketing department?) are still thinking in 90's terms. See above...


I did see above, and again, it's not because _they_ don't want to do anything about it. It's because nobody, not even ourselves, has come up with an alternative. Aside perhaps from some extremely advanced AI using neural networks, but the technology itself isn't there yet and it's not an area that music software developers are involved in. And even if the technology gets there one day, I would never trust a machine to execute human art.


----------



## Anton K (Oct 25, 2022)

Bollen said:


> So I know for a fact that there are hundreds if not thousands of musical 'elements' that are not included in the page i.e. written notation.


I agree.

There are varying degrees of what is written in the score. In Bach's fugues there is not much besides the notes, but in later periods some composers mark abundantly many details of tempo, phrasing and articulation (Max Reger). Often this makes the score more difficult to read, and of course there are still degrees of freedom, e.g. the curve with which one plays a long ritardando (I have performed one of his main piano works, op. 81, so I can say this from practical experience).

So even if 100% of a professional artistic performance is not possible for programs, quite a lot could be automated by software. But it is not sufficient to interpret the symbols in the score. Often markings are written only once, but in many similar places one has to decide if the markings should also be applied, perhaps in variations. The possibilities are wide.


----------



## pefra (Oct 25, 2022)

Bollen said:


> I think the point you're missing is that it's NOT about technology, but rather art.


Bollen,

I, as the OP, opened the thread to talk about technology.




Bollen said:


> Can I presume you're not an instrumentalist yourself?


I am. As I wrote somewhere in this thread.


Management Summary

The OP is not talking about art. The OP is not talking about how real musicians interpret or play the score on their desks. He is talking about how the process of getting a decent representation of his own compositions could be drastically enhanced by appropriate software: level-matching instruments, checking the overall quality of the composition, and having more fun in the process of composing. He suggested tighter integration between notation software and sound generator. He also suggested improving on, or inventing, software in the way Arne Wallander did with NotePerformer.

Hookline: I don't think we as composers should still be editing control changes in some editor in 2030.





thevisi0nary said:


> ... and until there is more integration between daws and samplers...



Yes. Thank you.

Back to work. Have fun.


----------



## pefra (Oct 25, 2022)

Oh, I forgot:




Anton K said:


> What could be first steps in the right direction?




Take NotePerformer / Staffpad as a starting point.


----------



## Thundercat (Oct 25, 2022)

I didn't read every post, so apologies if this was mentioned, but Straight Ahead Samples have VIs that use some amazing tech to recreate a perfect trumpet player, among other instruments. I think this is in the right direction and similar to what you are asking.

No, it's still not NP, but it's very realistic in the way it interprets what you play.


----------



## Thundercat (Oct 25, 2022)

Sarah Mancuso said:


> I mean, if there was only one way of playing something on a flute, we wouldn't have this "problem", but music performance doesn't work that way. When you are using a virtual _instrument_, you are the performer. It's no different than playing a keyboard or synth, or an actual flute: what you put into it is what you get out of it.
> 
> My experience is almost always that the more a virtual instrument tries to outsmart me by doing what it _thinks_ I want instead of what I tell it to do, the less usable it is and the worse the results are.
> 
> ...


I hear you and agree with most of what you say.

But I disagree with your last statement.

To hear Pavarotti or (insert your favorite artist) sing a single note is to get chills. The performance very much makes the piece. I can write the most beautiful music, but if the performance doesn't capture the essence of what I'm trying to say, it falls flat, every time.

The music can be amazing, but if the performance isn't there, no one would know it.

Just my $0.02.


----------



## pefra (Oct 26, 2022)

Thundercat said:


> I didn't read every post, so apologies if this was mentioned, but Straight Ahead Samples have VIs that use some amazing tech to recreate a perfect trumpet player, among other instruments. I think this is in the right direction and similar to what you are asking.
> 
> No, it's still not NP, but it's very realistic in the way it interprets what you play.


Thanks Thundercat,
I checked it out and I think (as you said) it heads in the right direction. The brass is convincing. Now have them integrate this into whatever notation program we are using, and bingo. Kontakt I don't like and don't want to fiddle around with. NP integrates and works completely in the background without me doing anything except installing it once. Done.


----------



## pefra (Oct 26, 2022)

Hi LatinXCombo,

thanks for chiming in.




LatinXCombo said:


> Regarding the comment: "Hack in your notes, enter p, ppp, f, fff, hairpins, whatever, and there you have it!"
> 
> The text of music is more of a guideline than a precise instruction. What's the difference between fff and ff? How do you translate "_Molto deliberato, freely at first_" into commands that a machine will understand?



How do I translate this in my key editor when I'm the one who has to do the work? It's either me who does this, or the software. Maybe the software could handle the task even better than I could. After all, I have to sit there and draw endless lines of control changes. Have a look at the videos from Spitfire Audio ("in action", or whatever they call them). Sometimes Apple Logic can be seen, with all the controllers hacked in that are necessary for a convincing representation of their libraries. The only thing I can say is this: what a mess. Yes, it sounds good, but WHAT A MESS.

Talking of reality: every time the demonstrator starts his playback, all the sounds sound exactly the same. No real instrumentalist would be able to pull this off. So when I work in my DAW, whenever I copy a part from bar 9 to bar 17, I have to dive in again at bar 17 and change all my controllers by a small amount. If I don't, bar 17 will sound exactly like bar 9.

Software could easily handle this.
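To make the point concrete, here is a minimal sketch in plain Python of what such automatic variation could look like. The `humanize_cc` helper is hypothetical (not any DAW's or library's actual API); CC events are modeled simply as `(tick, value)` pairs, and each copied value gets a small bounded random offset so the pasted bar no longer matches the source bar exactly.

```python
import random

def humanize_cc(events, max_jitter=3, seed=None):
    """Return a copy of (tick, value) CC events with a small random
    offset per value, clamped to MIDI's valid 0-127 range."""
    rng = random.Random(seed)  # seedable for reproducible results
    out = []
    for tick, value in events:
        jitter = rng.randint(-max_jitter, max_jitter)
        out.append((tick, max(0, min(127, value + jitter))))
    return out

# "Copy bar 9 to bar 17", then vary the expression curve slightly.
bar9_cc = [(0, 64), (120, 72), (240, 80), (360, 76)]
bar17_cc = humanize_cc(bar9_cc, max_jitter=3, seed=1)
```

The clamping keeps every value inside MIDI's 0–127 controller range, and the jitter bound keeps the variation subtle; a DAW could apply exactly this kind of transform on paste, instead of leaving the re-editing to the user.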




LatinXCombo said:


> (2) The "BMW 7-series" problem.
> 
> The 7-series is a $100,000 car. If you buy one new, you will spend $100,000 on it, but you'll have a warranty and maintenance included for five years. If you buy one used for, say, $25,000, you'll be out of warranty and spend $75,000 on maintenance and repairs over the next five years. But you're going to spend $100k.



There's either something wrong with your car, or with your comparison.


----------



## pefra (Oct 26, 2022)

Add-on

When reading through my post one could get the impression that I would prefer the big players like Steinberg doing something about this over contributors like Arne Wallander. No. I try to be as agnostic as possible in this regard. Whoever comes out with a high-end sound module that gives me what I want (see OP) will get my vote.


----------



## Ivan Duch (Oct 26, 2022)

I feel like we are already in a good place. I can create a pseudo-decent mockup in Dorico using a custom playback template with my libraries and NotePerformer.

That might be enough to give clients or myself a good idea of the music for approval.

From there it's either recording with real players or using sample libraries in a DAW. Either way, someone needs to take the time to perform the music. As others have said, I like to have total control over the performance. And the only way is to play it. Not to mention I like using plugins, synths and whatnot as well.

What's your end goal? If it's writing in notation and having a final product, I'd argue that every performance would sound the same unless we start using advanced AI able to understand a conductor's instructions. But if you're a concert composer, why bother with the computer performance?

My guess is that by the time we reach that technological breakthrough, we composers are going to be obsolete to the market. The big money will probably be going in that direction.

I would even say that performers might be the hardest to replace.


----------



## Thundercat (Oct 26, 2022)

Ivan Duch said:


> I feel like we are already in a good place. I can create a pseudo-decent mockup in Dorico using a custom playback template with my libraries and NotePerformer.
> 
> That might be enough to give clients or myself a good idea of the music for approval.
> 
> ...


Spot on. Ta.


----------



## Bollen (Oct 26, 2022)

pefra said:


> Bollen,
> 
> As the OP, I opened this thread to talk about technology.
> 
> ...


I think you misunderstand, we're both on the same page. I too desire what you desire, but I cannot think of a better way of doing it. I have never used NP in earnest; I have tried the demo after every big update, but I don't like how it performs my music, it sounds awful. So perhaps this is where we diverge... You want more work in that area, whereas I still think manual editing in Dorico is the best we've managed so far. Music software is unquestionably behind most other technologies...


----------



## SZK-Max (Oct 26, 2022)

Rather than saying the technology is behind: "scoring" is the last step. To evolve, we stop writing it, like a painter laying down his brush. In an actual performance it is very difficult to express everything with five dynamic symbols; beyond that, we have no choice but to communicate. So I have no choice but to replace the words with automation. If you want to skip that step, let an AI director do it, or plug a MIDI cable into the back of your head...


----------



## pefra (Oct 26, 2022)

Bollen said:


> I think you misunderstand, we're both on the same page. I too desire what you desire, but I cannot think of a better way of doing it. I have never used NP in earnest; I have tried the demo after every big update, but I don't like how it performs my music, it sounds awful. So perhaps this is where we diverge... You want more work in that area, whereas I still think manual editing in Dorico is the best we've managed so far. Music software is unquestionably behind most other technologies...


Hi Bollen,

Sorry if I misunderstood! I totally see your point, you being a trained musician for years (decades, probably). I think where we diverge is that you are talking about performing what is a written score, and I'm talking about generating that score. And it's not the way NP sounds that I enjoy, it's the concept behind it. It makes my life easier, but I can still imagine better sounds, knowing how a real orchestra sounds. But as I said, I'm absolutely tired of drawing miles and miles of control data, clicking in or recording articulation changes, drawing fades etc., blah blah. I still remember the days when we had the first VSL package, where we worked with one Cubase track per articulation, sometimes 5 or 6 for Violin I, the same for Violin II and so on, and what you saw on the screen was nothing short of a total mess. Well, it hasn't changed much since then. We have folders now, though...

I haven't touched my libraries at all this year, and as soon as someone comes out with a high-end NP (it could even be NP themselves) my libraries will have to leave my studio.

Nothing sounds like the real stuff. But in the process of getting there I want something way easier than we have now. No more line drawing!

cheers


----------



## Electric Lion (Oct 27, 2022)

I think a lot of people are not understanding what you are saying, or are exaggerating your requests unnecessarily.

From reading your post it sounds like you just want the software to do a better job of approximating real playing. You're not looking for it to be exactly like real life in every way. The people who are saying things like "but there are thousands of minute articulations players do that aren't written" seem to be missing the point. It's not about playing those thousand micro-inflections, it's about playing what _is_ on the page with some intelligence. The example of NotePerformer is an apt one. Sure, it's not perfect, but it does a very good job of inferring balance and musicality from the dynamics and articulations in a way that other libraries just don't. This is the heart of the problem, as no one has time to do all that manually for a 60+ piece orchestra for hours at a time. Sure, we can still tweak the final product if we want, but that should be the final 10%. It should be 90% of the way there on its own. What I think you're asking for is for library companies to build on this type of framework with their better sounds, and frankly I agree.


----------



## VSTHero (Oct 27, 2022)

In some ways, though, this is happening: there will be a NotePerformer 4, and MuseScore just released a beta with a library using the same concept and quality as Staffpad, which likely means Dorico will add its own built-in library in the future, similarly pushing quality up (you can already build your own notation-to-playback system using all your libraries in Dorico if you're willing to go that deep; Ed B. had some examples). Meanwhile, for folks who really want to sculpt every element, highly controllable libraries will continue to be released.


----------



## Page Lyn Turner (Oct 27, 2022)

I made an orchestration some time ago in Finale, for full orchestra and choir, 370 bars. After loading the MIDI file into Logic, I was looking at the screen with despair. You have to assign sounds to every staff (piccolo, flute... double bass), then work on every instrument: a few bars of staccato sounds here (accents, and randomize the other notes), then a legato phrase, etc., only to realize you've worked on 20 bars and have a few thousand left! (26 staves x 370 bars = 9,620 staff-bars; without the empty ones, let's say 7,000.)
So yes, a "high-end NotePerformer" that can deliver 85% of the final sound in Dorico is a nice thing to have! Of course, for trailer music or short two-minute pieces it's easy to work manually.


----------



## pefra (Oct 28, 2022)

Electric Lion said:


> I think a lot of people are not understanding what you are saying, or are exaggerating your requests unnecessarily.



Yes, and that's partly because I was unnecessarily beating around the bush, for whatever reason. Trying to be funny, or whatever. My fault.






Electric Lion said:


> From reading your post it sounds like you just want the software to do a better job of approximating real playing. You're not looking for it to be exactly like real life in every way.
> 
> ...


Exactly, that's what I wanted to say. Thank you very much.

I'm also realistic enough not to expect real-life performances from software like NP. I don't see real-life performances from virtual libraries either; sometimes they come close, but... There must be some other way than drawing mile after mile of control changes, articulations, volume curves and expression curves, and that's not even taking into account mixdown and mastering.

I think this will happen sooner or later, most likely not far from now. It's the next logical step. NP and StaffPad have shown that it can be done.

Now someone has to improve on that.


----------

