For Those of Us Considering Jumping from Kontakt to HISE

d.healey

Senior Member
That makes sense. Just to clarify, is it still okay to drag samples in from the folder on my desktop, rather than using the files section on the left sidebar of the HISE interface, as long as I'm dragging from the Samples folder that's inside the Project folder on my desktop?
Yes

The one group issue I'm concerned about is that if I use the Round Robins to create all the necessary "groups" I'll need, can those RR "groups" each have their own volumes, envelopes and other settings? I'm assuming that would be accomplished through scripting (e.g. if RR Group 4, use Envelope 2 and mute all other envelopes)?
This is where HISE's tree structure falls down (currently). Groups have almost no routing. As you guessed, you'll need to script this by turning modules on/off or adjusting their parameters. If it were possible to have more than one group active at a time (other than when using group FX), this approach could get really annoying.
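A rough sketch of what that per-group scripting could look like (the module IDs, note numbers, and group-selection logic here are all hypothetical; this assumes HISE's Synth.getModulator and setBypassed, so adjust to your actual module tree):

Code:
// Sketch only: switching envelope modules per RR "group".
// "Envelope1" / "Envelope2" are hypothetical module IDs.
const var env1 = Synth.getModulator("Envelope1");
const var env2 = Synth.getModulator("Envelope2");
reg rrGroup = 0;

function onNoteOn()
{
    rrGroup = Math.randInt(1, 5); // however you pick the RR group

    // if RR Group 4, use Envelope 2 and bypass Envelope 1
    env1.setBypassed(rrGroup == 4);
    env2.setBypassed(rrGroup != 4);
}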

One other question - One thing that looks really appealing is that it appears that we can apply individual scripts to any of the elements of the instrument. Obviously that's nice that we can have that control, but what makes that even more appealing (to me, at least) is that these scripts can be individually placed where I use them, rather than in a giant master script. (The smaller I can make the master script, the better.)
Yes, this modularity is one of the greatest things about HISE. You can have a bunch of little scripts instead of one massive script. And you can reuse your scripts easily in other projects. I have a bunch of little scripts I use often here. The largest script in your project will probably be your main interface script. Generally you should avoid doing any MIDI processing in that script, to keep all of the UI stuff out of the real-time thread, but as always there are exceptions and it depends on what you're doing.

So my question is - Is there a PGS equivalent so that these individual scripts can know what the master script is doing? For instance, in my wordbuilder example, can the master script tell an envelope script, "Hey, we're starting with an "s" instead of a "t", so make the Attack longer."?
Kind of. You can have global variables to share data between scripts. But there is no callback for when these variables change. Every script is also capable of accessing the controls of every other script, and this does trigger the accessed control's callback. With a combination of these two things and a bit of lateral thinking you should be able to have all the inter-script communication you need.

My current project has loads of scripts that interact with each other. Sharing data through global variables and carrying out actions based on UI control values which are set by other scripts.
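A hedged sketch of those two mechanisms together (every name here is hypothetical, and the control-index convention may differ in your HISE version):

Code:
// In the master script: share data via a global variable
global currentConsonant = "s";

// To make another script react, set one of its controls;
// accessing a control this way fires that control's callback
// in the other script. "EnvelopeScript" and index 0 are made up.
const var envScript = Synth.getMidiProcessor("EnvelopeScript");
envScript.setAttribute(0, 1);

In the envelope script, that control's callback would then read currentConsonant and lengthen the attack if it's an "s".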
 
OP
Mike Greene


Senior Member
Moderator
Thanks David. (And thank you Lindon, too.)

One more question (sorry for all the questions, but I need to make sure there are no "deal-breakers" lurking, before I actually commit to doing this) - I think you or Lindon mentioned before that HISE doesn't do wait commands, so we would need to use a timer callback for that.

In a wordbuilder instrument, obviously I use the heck out of wait commands, since I start with a consonant, then wait, then if there's a second consonant (s followed by t, for instance) play that second consonant, wait, then start the first part of the vowel, wait before starting the end part of the vowel, etc. I can make a good guess how to do this with whatever the equivalent to an "on listener" callback would be. (I think you called that a "timer"?) So my questions:

1. How short a time can that timer (listener) callback be without causing problems? Is a 5 millisecond loop safe?

2. I assume there can be "play note" commands in the timer callback?

3. Now that I think about this, I wonder if this could be accomplished with delay effects? For example, the user plays a note and the current syllable is "Stu." So the script plays three notes all at once:
a. One is a note where only the s group is activated, then
b. A second note (without a wait) is a note where only the t group is activated, but sets the Delay effect on that "note" to 50 milliseconds so we don't hear the "t" until after the "s", then
c. A third note (no wait command) is a note where only the "oo" group is active, but sets the Delay effect on that "note" to 80 milliseconds, so that in reality, it would occur 30 milliseconds (80 - 50) after the "t."
 

d.healey

Senior Member
The lack of a wait statement threw me too when I first started with HISE.

1. How short a time can that timer (listener) callback be without causing problems? Is a 5 millisecond loop safe?
Timers are limited to a minimum interval of 40ms. If you need faster then it's time to nag Christoph.

I just had a thought. You could hack a wait statement together using a loop and the Engine.getUptime() function. This won't be real-time friendly though so might not be a good idea. But something to play with!

Update: Doesn't seem to work, causes an execution time-out error :(

2. I assume there can be "play note" commands in the timer callback?
Yes. I use this a lot.

3. Now that I think about this, I wonder if this could be accomplished with delay effects? For example, the user plays a note and the current syllable is "Stu." So the script plays three notes all at once:
a. One is a note where only the s group is activated, then
b. A second note (without a wait) is a note where only the t group is activated, but sets the Delay effect on that "note" to 50 milliseconds so we don't hear the "t" until after the "s", then
c. A third note (no wait command) is a note where only the "oo" group is active, but sets the Delay effect on that "note" to 80 milliseconds, so that in reality, it would occur 30 milliseconds (80 - 50) after the "t."
Sounds like it should work, I've not tried changing the active groups in a timer. Give it a try and report back :) You might be better off separating your samples by velocity rather than group, unless you need the velocity range for some other purpose.
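A minimal sketch of that timer approach (the trigger notes and step count are made up; assumes HISE's Synth.startTimer, onTimer callback, and Synth.playNote):

Code:
// Sketch: stepping through phoneme notes from the timer callback
reg step = 0;

function onNoteOn()
{
    step = 0;
    Synth.startTimer(0.04); // 40ms, the current minimum interval
}

function onTimer()
{
    Synth.playNote(60 + step, 127); // hypothetical trigger notes
    step += 1;

    if (step > 3)
        Synth.stopTimer();
}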
 
OP
Mike Greene


Senior Member
Moderator
Timers are limited to a minimum interval of 40ms. If you need faster then it's time to nag Christoph.
Checking my current KSP script, some of my shortest wait times are 2ms, so 40ms would be way too long for certain consonant clusters. I think the delay trick should work, though. I'll test the method when I have a stronger footing with all this.

You might be better off separating your samples by velocity rather than group, unless you need the velocity range for some other purpose.
Definitely. There are almost 40,000 samples (just one mic position, so it's legitimately 40,000 samples), so we're squishing them in with all sorts of tricks. :grin:
 

Lindon

VST/AU Developer
Checking my current KSP script, some of my shortest wait times are 2ms, so 40ms would be way too long for certain consonant clusters. I think the delay trick should work, though. I'll test the method when I have a stronger footing with all this.


Definitely. There are almost 40,000 samples (just one mic position, so it's legitimately 40,000 samples), so we're squishing them in with all sorts of tricks. :grin:
If you want delays down to 1ms you can do this: simply add a "Simple Gain" effect to the module you want (in this case your sampler). You will see the Simple Gain effect has a Delay parameter, which you can set with your script...

Code:
const var MyGain = Synth.getEffect("MyGain");


and on some event....

Code:
MyGain.setAttribute(MyGain.Delay, somevalue);
In this case your script might fire all your "voice partials" at the same time, having set a ms delay for each sample so they play back sequentially... I've never done it - never needed a 1ms delay - but it should work, I think.

An additional nice thing is you get sub-ms resolution - you need 14.33 ms? No problem...

..and you can keep all your "voice partials" in an object array, with their sample map, their note number, and their delay amount...for ease of access..
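Putting those two suggestions together, a sketch might look like this (the effect IDs, note numbers, and delay values are all hypothetical; assumes one Simple Gain per sampler):

Code:
// Each "voice partial": its gain effect, trigger note, and delay in ms
const var partials = [
    { gain: Synth.getEffect("SGain"),  note: 60, delay: 0.0  },
    { gain: Synth.getEffect("TGain"),  note: 61, delay: 50.0 },
    { gain: Synth.getEffect("OoGain"), note: 62, delay: 80.0 }
];

function onNoteOn()
{
    for (p in partials)
    {
        p.gain.setAttribute(p.gain.Delay, p.delay);
        Synth.playNote(p.note, 127);
    }
}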
 

d.healey

Senior Member
That's probably even better - as you can use the same "voice partial" and just delay it different amounts as needed in each phrase..
I'm not too sure how useful it will be in this particular situation. It only works with Message events, not played notes. It might be possible to create an artificial message using a message holder though.

Edit: I just remembered there is also Synth.addNoteOn() and Synth.addNoteOff() which have a timestamp parameter. That'll probably do the job.
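A sketch of that approach (assuming Synth.addNoteOn takes a channel, note number, velocity, and a timestamp in samples; the note numbers and delays here are hypothetical):

Code:
inline function msToSamples(ms)
{
    return Math.round(ms * Engine.getSampleRate() / 1000.0);
};

// "s" now, "t" after 50ms, "oo" after 80ms
Synth.addNoteOn(1, 60, 127, 0);
Synth.addNoteOn(1, 61, 127, msToSamples(50));
Synth.addNoteOn(1, 62, 127, msToSamples(80));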
 
OP
Mike Greene


Senior Member
Moderator
If you want delays down to 1ms you can do this: simply add a "Simple Gain" effect to the module you want (in this case your sampler). You will see the Simple Gain effect has a Delay parameter, which you can set with your script...
Perfect. So it looks like I have some good options. This one appeals to me most (at the moment, at least), because it's easier for me to understand. Plus I think it might make for an easy way for me to organize the samples:

The instrument has Initial Consonants, Second Consonant, Vowels, Closing Vowels, and three Ending Consonants. So I could make one "Sampler" for each of those categories. That would be 7 Samplers. So for each syllable, the script would send the appropriate delay parameter to each Sampler, then simply play them all at once. (Obviously muting the Samplers that aren't needed, since most words don't have that many consonants.)

I tried this just now and ... it works! It's just static settings, with no scripting (I'm setting the delays manually), but making the script control those delays will be easy. One snag is the maximum delay in Simple Gain is 500ms, which might not be enough in certain circumstances. So I also tried the regular Delay Effect and that gives me plenty. So one way or another, this will work.

This is a whole lot of questions I'm asking, so I really appreciate your help with this, guys! I gotta say, HISE is very appealing, because it definitely has some advantages over Kontakt. (And vice versa, of course.) Before diving in, I need to make sure it's capable of doing everything I'll need, though, which is why I'm asking all these questions.
 

José Herring

Senior Member
It got so technical that I got lost so forgive me if this has been answered.

So say you go with HISE, can it then be locked and distributed as a player type instrument or will the end user need a copy of HISE to open your protected samples?
 
OP
Mike Greene


Senior Member
Moderator
It got so technical that I got lost so forgive me if this has been answered.

So say you go with HISE, can it then be locked and distributed as a player type instrument or will the end user need a copy of HISE to open your protected samples?
It would be self contained. (The customer doesn't need HISE. The customer won't even know HISE was used in the creation.)

Many people (including me) prefer libraries be released in Kontakt, since it's so easy and reliable. For instruments like Realivox Blue and Hip Hop Creator, though, I think releasing in a self-contained player would actually be an advantage. Many of those customers don't know nothin' about Kontakt (they ain't VI-Control people), so many are confused and annoyed that they have to load two things (KPlayer and then my library) to make things work.
 

Lindon

VST/AU Developer
It got so technical that I got lost so forgive me if this has been answered.

So say you go with HISE, can it then be locked and distributed as a player type instrument or will the end user need a copy of HISE to open your protected samples?
You build your instrument in HISE - just like you would in Kontakt - but then you select an export option and HISE builds you a deliverable that you can send to your user. Export options are:

on Windows - VST2, VST3, AAX, Windows Standalone exe
on MacOS - VST2, VST3, AU, AAX, Stand Alone Mac App

You can also build for deployment on iOS (iPhone or iPad)
 
OP
Mike Greene


Senior Member
Moderator
I have one last question - One of the main reasons for me doing this is that for my wordbuilder instrument, I want to give the user the ability to type phrases directly from their QWERTY keyboard. (Which Kontakt doesn't allow.)

I have an English pronunciation dictionary database (yes, it's legal - boy, that was a challenge!), so I want to give the user the option to type in English, rather than phonetically. So my instrument will look at each word the user types, then scan my database and hopefully find a match, and use the associated pronunciation/phonetics for that word.

David told me previously that this would be possible using a JSON file. I have some familiarity with those from some Python coding that I've done, although an English pronunciation dictionary would be waaaayyyy bigger than anything I've ever done before. So my question is - is there a size limit to how big a JSON file can be, and still be totally accessible to HISE?

Assuming size isn't an issue, is there anything I need to be careful of, regarding formatting of a JSON file for this? Here is a JSON file from a Python script I made:
Code:
{"Window Size": "1201", "initial frequency": "87", "vowelFolder": "/Users/AAA/Desktop/SpliceFolder"}
That file has all the key/value pairs on one line. For something as large as a dictionary, that will obviously be a mess, so is it okay to put each pair on its own line, like this? (Forgive the newbieness of that question.)
Code:
{"Window Size": "1201",
"initial frequency": "87",
"vowelFolder": "/Users/AAA/Desktop/SpliceFolder"}
Also, the dictionary I'm using lists pronunciations like this:
DUCK: D UH K

So could the JSON file look like this? {"DUCK": "D UH K"}

Or does each of those vowels and consonants need to be an individual element? Like this: {"DUCK": "D": "UH", "K"}

Or another option is that I could assign a number to each vowel/consonant, then combine them into one big number, like this: {"DUCK": "120487"}

(I realize this is getting pretty far into the weeds, and I may be pushing the bounds of "free advice." So I understand if this is beyond what anyone gets into.)
 

d.healey

Senior Member
I'm not sure if there is a size limit imposed by HISE, we'd need @chrisboy to jump in and let us know.

With JSON just be careful to format it correctly. If you miss a quotation mark, comma, colon, etc. then it won't work when you load it into HISE, and you probably won't get an error message, which can be confusing.

Depending on the text editor you use for your JSON (I use CudaText) you might be able to install a JSON linter which will automatically validate the JSON. Or you can paste it into an online JSON validator.

The JSON can be written across multiple lines, I do this all the time, makes it much easier to read. I was using external JSON files in my livestream yesterday, video's still on YouTube ;)

There are a few ways to format DUCK. I'd probably use an array, like this:

{"DUCK": ["D", "UH", "K"]}

Then you can access each part by its index.

JSON can contain a mix of objects, arrays, strings, and numbers, as long as it's all contained in one overall object.
 

chrisboy

Active Member
The JSON import should be fine though. I am just using the default JUCE JSON parser implementation, which is about as fast as you can get for parsing any data format, and I know some developers use a copy protection scheme where they load about ten thousand license keys as a JSON file; whether this is a sane approach is another question :)

The real question is how to format the JSON so that the parser has to do as little as possible (because a dictionary with, let's say, 50,000 words is not trivial), and I would go for a whitespace-separated string as the value, like this:

{
"DUCK": "D UH K",
//...
}

The array approach that David mentioned might look cleaner, but parsing those arrays might turn out to be the bottleneck if you have tens of thousands of them (not that there is a hard limit, but it might result in your plugin taking 3-4 seconds longer to load).

You can later still get an array just by calling commaSeparatedString.split(" "), but this way you defer the array parsing to the latest possible moment.
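So the lookup side might be as simple as this sketch (the loading step is omitted; dict here stands in for the parsed JSON, and the helper name is made up):

Code:
const var dict = { "DUCK": "D UH K" }; // stand-in for the loaded dictionary

inline function getPhonemes(word)
{
    local entry = dict[word]; // dictionary keys assumed to be uppercase
    return isDefined(entry) ? entry.split(" ") : [];
};

// getPhonemes("DUCK") gives ["D", "UH", "K"]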
 
OP
Mike Greene


Senior Member
Moderator
Thanks Chris. There are indeed tens of thousands of them, and I can see how parsing a file that big could be time consuming, so I'll start with your "D UH K" approach. (Actually, I'll start with just 10 words and troubleshoot that, before I start formatting 50,000 entries for a script I haven't even written yet. ;) )

That's the last of my preliminary "deal-breaker" questions. I think this will work, so it's time to start diving in.

Thanks guys!
 

Ross Sampson

New Member
Thought I'd chip in with my getting-started day today, for those considering the jump with little to no coding background, as there are a few terms and things to get your head around before even opening HISE (Xcode, IPP, compiling, JUCE, GitHub, repository) and pretty much all of them were unfamiliar to me.

To access the latest version of HISE you need to download the source code from GitHub and then compile the code to create an executable application version of HISE (correct me if I'm wrong). This isn't too difficult to do, but a little daunting if you've never done this before. On macOS you need Xcode, and Intel Performance Primitives (IPP) is recommended. A reasonable analogy might be: writing the code is like writing an essay in Microsoft Word, the 'executable' is the physical book version, compiling is the printing, and Xcode is the printer... maybe?

What is compiling? @d.healey & @chrisboy please feel free to chip in with a better explanation, but here is an explanation from Reddit that seems ideal:

"As you may know, everything in a computer is represented by a series of 1's and 0's (which themselves represent high and low voltages on transistors, but that's a topic for another time). When the computer runs a program, the program itself is made of a bunch of 1's and 0's.

However, since we still need humans to write our programs, putting everything in 1's and 0's (called machine language) would be very difficult. So we made higher level languages like Java and C# to write code in. These languages look a lot more like English, so they're a lot easier to write and maintain.

When you compile code, the compiler (usually another program) takes the program the human wrote, and converts it into the program the computer can understand (i.e. converts from Java to machine language). The very short version could be: yes, compile means to make the code executable."


So at present the latest and most up-to-date version of HISE is presented in source code and you need to follow the steps helpfully presented by David Healey here to compile before you even open HISE.

So you get SOURCE CODE > COMPILE (via steps linked above) = then you have HISE to open
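In rough command form (the repository URL is the official one; the exact project files and build steps vary by HISE version, so follow the linked guide for the authoritative route):

Code:
# grab the source code from GitHub
git clone https://github.com/christophhart/HISE.git
cd HISE
# then open the generated Xcode project for the standalone app
# and build it, per the steps in David's guide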

Downloading Xcode: Requires an Apple developer account, which is easy to set up - https://developer.apple.com/xcode/

Downloading IPP: Requires an Intel account, which is also easy to do. One thing I'm not sure of is which elements of this need installing. Will do a bit more research once I've got Mojave installed tomorrow.

How they all communicate I'm not too sure, from a complete layman's perspective. My understanding is Xcode is a development tool, or an environment for multiple development tools, for creating apps for the Mac OS and OS X platforms, and when compiling HISE it is used to do just that; it's part of taking the HISE source code and creating an 'app' form on Mac OS.

GitHub: My understanding of GitHub is it's a website where people can share code and projects easily, including versions and stages of development of the projects making it especially easy for people to collaborate and/or build on other projects especially open source ones. One thing I'm not sure about is what 'git' is? I know what it refers to here in the UK, but not in the coding sense... If anyone could chip in with what the 'Git' in GitHub means that would be cool.

Repository: I think this is the term given to a location where code is stored, maybe just like a folder, but specifically for containing core code? Anyone able to clarify that more for us laymen?

- - - - -​

The following is very specific to me, but thought I'd share in case it's helpful to someone else; it only really applies to anyone running an old Mac. I have a 2012 Mac Pro running High Sierra. This limits me to older versions of Xcode and IPP.

The latest version of Xcode is only compatible with Mac OS 10.15.2 onwards, so it's a no-go for High Sierra. It seems Xcode 10.2 can be used in High Sierra with some editing of Xcode files, which makes things even more fiddly. It seems relatively simple to install older versions of IPP, but I figured going round-about ways from the start isn't ideal for something potentially so crucial. After looking at ways to install older versions of Xcode, and which ones are okay for High Sierra, I'd recommend, unless you really, really have to, erring on the side of upgrading whatever you need to get to Mojave+ (8 years has been a mighty fine run for this ol' machine). Maybe @chrisboy could clear any of that up, i.e. any issues using older versions of IPP or Xcode to compile?

That leaves updating Mac OS to Mojave, which can be done on Mac Pro 2012s with the right graphics card, see here. I've ordered the Sapphire Pulse Radeon RX 580 to install, which means Mojave is, or at least was until recently, officially supported (I think however Apple don't officially support Mac Pro 2012s running Catalina).

Of course 8 years is a long time to have a machine and there are other things cropping up requiring later versions of Mac OS so it's probably about time to upgrade anyway, but seems the graphics card upgrade might get another couple years out of it. And going back 15 years a computer older than 6 months was basically ancient so it's good times.

Coming from a place that isn't familiar with GitHub, repositories, Xcode etc, thought I'd write a little something about them with the hope that those who are familiar with them might help clarify any definitions and explanations to make it all more accessible to the laymen.