Frequently Asked Questions

Here we answer the questions we are most frequently asked about Opus+.

What is Constrained Random Generation (CRG)?

Constrained Random Generation is the name of the algorithm used by Opus+ at its core to make every compositional decision. The process is very simple, akin to a composer making every compositional decision by throwing a weighted die; the weights of the die (the constraints) are expressed as arrays of integers. This allows Opus+ behaviour to be altered 'probabilistically across a smooth continuum of constrained behaviour' rather than abruptly, i.e. a constraint being either applied or not.

For example, suppose we worked within a rule-based system and had a boolean rule specifying whether crotchets are permitted in a particular bar - let's use bar #9 as an example. This rule can only be true or false: either the user allows crotchets in bar #9 or they don't, and the rule is rigorously applied. However, if we express this same constraint as a weight - let's say as a percentage - then the user can apply the constraint with a certain probability. A setting of 0% means there will never be crotchets in bar #9, 100% means there always will be, and 50% means there will be, on average, half the time. There are 99 gradations between these two limits, and this 'smears' the either/or constraint across a smooth range of possible values.
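
As a minimal sketch of this idea (ours, purely illustrative - the class and method names are invented, and this is not Opus+ source code), the rigid boolean rule becomes a percentage weight that is consulted with a fresh random draw each time the decision is made:

    import java.util.Random;

    // Illustrative only: a percentage weight in place of a boolean rule.
    public class CrotchetConstraint {

        private static final Random RANDOM = new Random();

        // Crotchets are permitted with the given probability (0-100).
        // 0 = never, 100 = always, 50 = on average half the time.
        static boolean crotchetsPermitted(int percent) {
            return RANDOM.nextInt(100) < percent;
        }

        public static void main(String[] args) {
            int weightForBar9 = 50;
            for (int i = 0; i < 5; i++) {
                System.out.println("Crotchets in bar #9: "
                        + crotchetsPermitted(weightForBar9));
            }
        }
    }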

Using integer weight arrays to express the constraints, rather than a rule-based approach, also makes it very easy to perturb the constraints themselves at runtime - in effect, rule re-writing on the fly. And again, the constraints can be re-written so that they change smoothly along a continuum rather than abruptly.
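
The same idea generalizes from a single yes/no decision to a choice among many alternatives. The sketch below (again ours, purely illustrative) rolls a 'weighted die' over an integer weight array, and then nudges one weight at runtime - the rule re-writing described above, happening smoothly rather than abruptly:

    import java.util.Random;

    // Illustrative only: a 'weighted die' over an integer weight array.
    // Each index is one possible compositional choice; its weight is its
    // relative probability of being chosen.
    public class WeightedDie {

        private static final Random RANDOM = new Random();

        // Pick an index with probability proportional to its weight.
        static int roll(int[] weights) {
            int total = 0;
            for (int w : weights) total += w;
            int r = RANDOM.nextInt(total);
            for (int i = 0; i < weights.length; i++) {
                r -= weights[i];
                if (r < 0) return i;
            }
            throw new IllegalStateException("weights must be positive");
        }

        public static void main(String[] args) {
            // Weights for a note duration: semibreve, minim, crotchet, quaver.
            int[] durations = {1, 3, 8, 4};
            System.out.println("Chosen duration index: " + roll(durations));

            // Perturbing a weight at runtime re-writes the 'rule' on the fly;
            // a small nudge makes crotchets only slightly more likely.
            durations[2] += 2;
            System.out.println("After perturbation: " + roll(durations));
        }
    }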

The CRG algorithm is explained further in the Opus+ Conceptual Overview document.

Why does Opus+ output LilyPond markup files?

There are several reasons why LilyPond is the best output format for Opus+:

  • Simplicity: to properly review a composition it's good to have a score in traditional music notation, plus an audio file of some description so the piece can be heard. While there are Java libraries that offer some of these features, generating LilyPond markup is significantly easier than any of the alternatives, particularly since the two outputs must correspond correctly; the MIDI must be a rendition of the score (see the sketch after this list).
  • Quality: the graphic quality of the sheet music produced by LilyPond is second to none. Admittedly, Opus+ does not do justice to the potential offered by LilyPond, but using LilyPond means that although the output is only a sketch of the composition, the music engraving is of the highest possible quality.
  • Flexibility: a LilyPond file gives the user another potential point of intervention for editing the generated material. The LilyPond file created by Opus+ can be edited directly by any LilyPond-literate user, as an alternative or a complement to importing MIDI files into music studio software for editing.
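
To make the simplicity point concrete, the sketch below (illustrative only - the note data and file name are invented, and this is not Opus+ source code) shows how little machinery is needed to emit LilyPond markup from Java. Running 'lilypond phrase.ly' on the result engraves a PDF score and, because of the \midi block, renders a matching MIDI file from the very same material:

    import java.io.IOException;
    import java.io.PrintWriter;

    // Illustrative only: emit a minimal LilyPond file for a four-note phrase.
    public class LilyPondSketch {
        public static void main(String[] args) throws IOException {
            try (PrintWriter out = new PrintWriter("phrase.ly")) {
                out.println("\\version \"2.24.0\"");
                out.println("\\score {");
                out.println("  \\new Staff { \\time 4/4 c'4 e'4 g'4 c''4 }");
                out.println("  \\layout { }"); // engrave the score...
                out.println("  \\midi { }");   // ...and render matching MIDI
                out.println("}");
            }
        }
    }
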
Why was Java used as the implementation language?

Java was chosen because it is a comparatively easy programming language to learn, yet very powerful, with extensive libraries that cover every eventuality. This means that whatever feature might be necessary for Opus+ in future, we judge it will be achievable from within the Java language, without recourse to other technologies. Also, Java is taught a great deal at undergraduate level and in schools, and we wanted to provide a useful programming library for this innovative user base. We also judged that, unlike C++, Java was something students of composition would be much more likely either to know already or to be prepared to learn.

Java was originally owned by Sun Microsystems, which was acquired by Oracle Corporation in 2010. Java is developed through the Java Community Process (JCP), a public consultation process which is democratic and open.

Originally Sun hesitated to standardize Java because of the activities of Microsoft, which aimed to undermine the Java standard and platform in various ways, using anti-competitive strategies, in order to reassert control over the programming-language market. Sun withdrew from the formal standardization process to protect the Java standard from Microsoft. However, as of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of the Java technologies under the GNU General Public License. So it's as near a public standard as possible, and it is freely available.

Other considerations are platform independence and execution speed. Java runs on nearly any platform, from Intel PCs, Macintosh and Linux through to Solaris and HP-UX - in fact anywhere that Opus+ is likely to be installed. In terms of speed and scalability, Java is ideal for a stand-alone, single-user application like Opus+.

How fast/scalable is Opus+?

We test the performance of Opus+ at every stage of development, and it is very fast. The time it takes to generate a composition depends on the amount of work Opus+ has to do; generally, the longer the piece and the more instruments involved, the longer it takes. It also depends on the way the music is generated: Opus+ can use different algorithms at different points in the process, these can be switched and combined in many different ways, and some algorithms are much more complex than others and so take more time to run.

To give some idea, though: a simple 'random notator', which generates each voice progressively, bar by bar, at random, takes about two-and-a-half minutes to generate 250,000 bars of piano music on our development machine. That is a rate of around 100,000 bars a minute, or in excess of 1,600 bars a second! Played in 4/4 time at 80 beats per minute, this gives 80/4 = 20 bars per minute, or 1,200 bars per hour, or 28,800 bars per 24-hour day. The total duration of this musical piece would be 8 days, 16 hours and 20 minutes of continuous playing!
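
The arithmetic is easy to check mechanically; this little sketch (ours, not part of Opus+) reproduces the figures above, assuming 4/4 time throughout:

    // A quick check of the figures above, assuming 4/4 time throughout.
    public class RateCheck {
        public static void main(String[] args) {
            long bars = 250_000;
            double generationMinutes = 2.5;
            System.out.printf("Generation: %.0f bars/min, %.0f bars/sec%n",
                    bars / generationMinutes, bars / (generationMinutes * 60));

            double barsPerMinute = 80.0 / 4; // 80 bpm, 4 beats to the bar
            long playMinutes = (long) (bars / barsPerMinute);
            System.out.printf("Playing time: %d days, %d hours, %d minutes%n",
                    playMinutes / (24 * 60),
                    (playMinutes / 60) % 24,
                    playMinutes % 60);
        }
    }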

Why is the MIDI produced so poor?

The main effort in this first version is to produce musical scores, with a view to the music being performed by human musicians. This is simply because the effort required to generate scores is far less than that required to create MIDI performances, as a composition contains less information than a performance. At Opus+ we are also biased in our musical tastes towards music skilfully performed by real musicians on real instruments, rather than synthesized music!

However, as this project has progressed we have realized there are some really interesting ideas in the area of 'impossible compositions' - music which is simply too difficult for humans to play. Such compositions require some form of MIDI representation in order to be heard at all, and in these cases the score is of secondary importance. It is possible to import the raw MIDI produced by Opus+ directly into music studio software such as Apple Logic Pro or GarageBand. Simple rough mixes consisting of little more than a few tweaks - selecting MIDI voices for each instrument, setting the relative volume between instruments, adding a little reverb to each track, and panning the instruments across the stereo image - have a remarkable effect and really bring the composition to life. In fact, the MIDI having little in the way of dynamics actually makes it easier to work with after importing into a DAW, because the volumes of all the sounds start out identical.
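
Incidentally, producing this kind of flat-dynamics MIDI from Java needs nothing beyond the standard javax.sound.midi package. The sketch below (illustrative only, not Opus+ source code) writes a short phrase with every note at an identical velocity, which is roughly the character of the raw MIDI described above:

    import java.io.File;
    import javax.sound.midi.*;

    // Illustrative only: write a four-note phrase where every note has the
    // same velocity, i.e. no dynamics at all.
    public class FlatMidiSketch {
        public static void main(String[] args) throws Exception {
            Sequence sequence = new Sequence(Sequence.PPQ, 480); // ticks per crotchet
            Track track = sequence.createTrack();
            int[] notes = {60, 64, 67, 72}; // a C major arpeggio
            int velocity = 64;              // identical for every note
            long tick = 0;
            for (int note : notes) {
                track.add(new MidiEvent(
                        new ShortMessage(ShortMessage.NOTE_ON, 0, note, velocity), tick));
                tick += 480; // one crotchet later
                track.add(new MidiEvent(
                        new ShortMessage(ShortMessage.NOTE_OFF, 0, note, 0), tick));
            }
            MidiSystem.write(sequence, 1, new File("phrase.mid")); // type-1 file
        }
    }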

When the composition module of Opus+ is completed, it is our intention to continue research into the relationship between composition, performance and production, and to further improve the quality of the MIDI produced. In theory it should be possible to take a composition and produce a synthesized performance that can pass as a real, live rendition. From our perspective this is still in the future, and in the meantime other researchers may advance the technology in this direction, leaving us with less work to do.

Who owns the copyright of the music generated by Opus+?

All things being equal, you own the copyright of any music generated by Opus+ on your own machine; our license agreement covers only the software itself, not the music it creates. There is not even a requirement to indicate that Opus+ was used to compose all or part of any piece; you can use Opus+ in secret if you wish. However, we are keen to promote this project, so we ask that whenever a piece wholly or partially composed using Opus+ is performed or appears in a public context, this is made clear on all media and announcements. Obviously, if you generate a very popular piece of music and make a lot of money, we would appreciate a contribution to our on-going research efforts!

Who owns the copyright if Opus+ by chance generates a composition already written by another person?

This is theoretically possible, but incredibly unlikely. We do not profess expertise in copyright law, but we would expect the prior copyright to take precedence. This means that the person who first composed the piece owns its copyright.

What is 'found' music?

'Found music' is where an interesting composition is simply discovered out of the blue and treated as a complete, (almost) finished entity in its own right. Music that is 'found' is not necessarily 'finished', in the sense that it might benefit from orchestration, arrangement and production just as any other composition might. The idea is important for Opus+ because the proportion of generated material that can be used simply 'as is' is a very good measure of its quality, even though this measure is not properly quantifiable, given its high dependence on the musical tastes and creative intuitions of the human user.

What are the guidelines for mixing Opus+ compositions, for example in Apple Logic, so they retain the status of 'found music'?

The status of 'found music' means that the generated composition must not be edited in any way, and this imposes constraints on the production and mix-down process, so that a high degree of similarity, or congruence, is maintained between the generated composition and the final mix. The number of potential rule-sets defining this similarity relationship is enormous, but we choose to work with the one described below, which preserves the original order of the musical material - so we can clearly hear its musical lineage from Opus+ - without limiting creativity at mix-down too much.

The rules we use are as follows. New instruments can be assigned to different voices, and the original voices can be arbitrarily shared out 'vertically' between the new instruments. This includes copying or doubling parts originally played by a single instrument, as long as the bars involved are only copied vertically. Standard studio effects such as equalization, echo, reverb, distortion, scintillation, panning, humanization and the like are allowed. It is even permissible to change the octavation of musical phrases - for example if a treble part is moved onto a bass instrument - or to drop voices out for a few bars. The volume and tempo can also be changed, both globally and locally, for example to produce diminuendo and ritardando. These actions all concern 'production' or 'orchestration', would commonly occur in the normal process of recording any composition, and are not regarded as part of the composition per se.

However, tampering with the actual fragments - correcting 'bum' notes, moving them horizontally through time, or writing new fragments - is specifically disallowed. These actions materially affect the composition itself, and the music can no longer be said to be 'found'.

Of course, our rules are entirely arbitrary, but they adhere to the general principle of 'the fewer changes the better'. If you can avoid vertical part sharing, voice doubling and drop-outs in particular, the 'compositional congruence' with the original will be very high.

What steps are necessary to import and arrange an Opus+ generated MIDI file into Apple Logic?

1. Drag and drop the MIDI file produced by Opus+ into the 'arrange' area of a new Logic project.

2. Delete the extra blank 'global' track if one is created.

3. Choose preliminary MIDI instruments for each track.

4. Open Logic's own Global Tracks and insert the time signature changes throughout the piece. Refer to the score PDF file generated by Opus+ to see what these are and in which bars they occur.

5. Optionally repeat for key signature changes (not tested).

6. Split each single long green region into individual regions at fragment boundaries where possible.

7. Optionally colour-code fragments which are identical, so the structure of the piece is visibly apparent.

8. Either move regions vertically between tracks as needed, or duplicate entire tracks for related MIDI sounds (e.g. Oboe Legato and Oboe Staccato) and mute whole regions to give a broad overall structure to the piece. Either way, do this with an eye/ear to thematic development.

9. Now work within each region and mute individual fragment/phrase notes to remove chords from voices that can't play them, and/or distribute these chords evenly between different instruments. Mute notes to offset fragment/phrase onset so changes of voice do not monotonously coincide with region boundaries.

10. Move regions vertically between instruments/voices to give variety. Optionally transpose parts by octaves if, for example, a treble part is moved to be played by a bass instrument. Using only octave transpositions will not alter the harmonic content of the generated composition, and this is important for retaining its status as 'found music'.

11. Select each track in turn, open the Piano Roll Editor and choose Transform/Humanize. Use the defaults and click 'operate only' between two and four times. (Bass parts usually need fewer passes, it seems, as these must be more regular to keep time properly.)

12. Pan the voices across the stereo image to give consistent balanced natural sounding placement, unless of course you want something more freakish!

13. Set up an Aux channel for a global reverb environment. Use Space Designer, as this is probably the best; there are a huge number of excellent pre-sets on the default drop-down menu, and these are more than adequate for 'all but the most discerning'.

14. Open global tracks and change the tempo as needed throughout to emphasise crescendo and decrescendo.

15. Now use mix automation in the usual way, then bounce. We always bounce to mp3 format with the following settings:

  • Start = 1 1 1 1.
  • End = one or two bars past the end of the piece to allow for reverb tails.
  • Mode = Offline.
  • Add Effect Tail is selected, (which adds a little extra reverb at the very end).
  • Normalize = On, (this automatically sets the global output level such that the loudest transient in the entire piece sits at 0 dB, which guarantees maximum output level with no unwanted distortion).
  • Bit Rate Mono = 80 kbps, (this is the default and is unused so far as we know).
  • Bit Rate Stereo = 224 kbps, (slightly better than the 192 kbps commonly regarded as near-CD-quality mp3).
  • Variable Bit Rate VBR is selected, (as this uses the available bit-rate more effectively during complex passages).
  • Quality = Highest.
  • Use Best Encoding is selected.
  • Filter Frequencies Below 10 Hz is selected, (this allows more bit-space for the audible range).
  • Stereo Mode = Joint Stereo, (this is the default; as yet we don't understand the differences here - they are not described properly in the Apple Logic docs, probably because the differences, if discernible, are impossible to describe adequately - nor have we experimented with them as the docs suggest).
  • Write ID3 tags is selected.
  • ID3 Settings... are opened where we set various mp3 meta-data such as the Song Title, Artist, Subtitle, Composer, Comment, Genre, Copyright and URL.