# planet.linuxaudio.org

## September 18, 2017

### GStreamer News

#### GStreamer 1.12.3 stable release

The GStreamer team is pleased to announce the third bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## September 17, 2017

### OpenAV

#### 02: Ctlra Virtual Devices

Virtual Ctlra devices? But why do you need or care about that? Read on – this is going to change how you (and the community) work with hardware and software controllers. To state the problem: we all own hardware controllers – MIDI, USB, or something else. Some DAWs support them – allow them to even “map” to different functionality – but it is often difficult and error-prone. What’s worse is that if you ask the developer of the DAW for help, they can’t help you because they don’t have access to the hardware… or do they?

### Virtual Ctlras!

So this is where virtual devices come in – and save the day. The Ctlra library allows any fully supported Ctlra device to be “virtualized” or simulated by the developer. If a user has an issue with a particular device, the developer has access to the software version of it! A mock-up created by the Ctlra library can be used instead of real hardware to test and reproduce the user’s issue.

### Developers and Musicians?

What else can be done using a virtual Ctlra? Well, say you are a musician – and you want your hardware controller to map to an audio looper in a specific way. It doesn’t currently work correctly, and you don’t have the time or experience to create the mapping yourself. With virtual devices, any developer can help you, simulating your controller hardware and implementing the mapping for you. Perhaps you’re happy with their work, so you buy them a beverage in return. The hardware accessibility problem: solved!

### More More Moarrr!

How about creating a prototype controller using the Ctlra library, testing its workflow using a software interface, and later building a physical mockup using an Arduino or Raspberry Pi? What if hardware vendors supplied Ctlra drivers with their newly created hardware? The options to utilize and customize how you use their hardware with your favorite software become amazing.

Think we’re biting off more than we can chew? Nope – the 84 commits in the last 2 weeks (in the Ctlra repo alone!!) beg to differ: virtual devices are available! Don’t believe we’re going to be able to create UIs on various platforms, and embed them into host applications? Yes we can – check out the purpose-built AVTKA UI library for creating virtual Ctlra interfaces!

### Signoff and Next Up

We hope you’re as excited as we are about this whole concept – OpenAV has been working towards this for a long time – and it’s great to finally push this code out to the community! So what’s next? Well, we can take an in-depth look at the integration of the hardware and virtual controller – that should showcase some of what becomes possible when real-world audio software gets Ctlra functionality integrated…

-Harry of OpenAV

## September 16, 2017

### Libre Music Production - Articles, Tutorials and News

#### Ardour 5.12 released

Ardour 5.12 has just been released! The main new features in this release involve session/track template management and improvements to MIDI patch changing, as well as the usual bug fixes.

## September 15, 2017

### ardour

#### Ardour 5.12 released

Ardour 5.12 is now available.

Although when Ardour 5.11 was released we expected a significant gap until 6.0 would be announced, enough notable features and fixes have accumulated that it seemed better for us to push out a 5.12 release before we embark on the major code changes that will mark the real start of the development process for 6.0.

Much of the work in this release was sponsored by Harrison Consoles.

Two of the most notable new features are the improvements in functionality to the new session and new track/bus dialogs, which now offer much easier and more powerful ways to use templates. These include dynamic "track wizard" templates that allow you to interactively set up sessions and/or groups of new tracks/busses very quickly and very easily. This builds on the new template manager dialog introduced in 5.11, and a new less obvious feature: the ability to create dynamic templates with Lua scripts.

Also notable is the new patch selection dialog for MIDI tracks/instruments, which provides an easy and convenient way to preview patches in software and hardware instruments. Naturally, it integrates fully with Ardour's support for MIDNAM (patch definition files), so you will see named programs/patches for both General MIDI synths and those with MIDNAM files.

## September 11, 2017

### OpenAV

#### 01: Ctlra

Hey! With this new site online, we’d better post some actual content! So we are going to post articles to show what we spent the summer developing. There’s a range of projects always going on, but usually we focus on a particular topic. Right now that’s the Ctlra project!

### Ctlra

Ctlra is a library that allows software developers to interface with hardware devices. Technically, it “abstracts” the details of the hardware device away, and provides the application with “generic events”. Great. But what does it mean to you – the musician on stage? It means any Ctlra-enabled application (more on that in a future post!) will be easy to control from your hardware control surface. More importantly, not just “input” will work well – it’s also about feedback: lighting up the controller, displaying useful info on the device’s integrated screen!

So what is OpenAV actually doing for this? Over the last year (since Nov ’16!) we’ve been writing code, lots of code. Sometimes this code enables your hardware device to actually work on the Linux platform; sometimes it exposes the device in a different way, to allow your audio software to easily interact with the device. Check out the YouTube video of the presentation at the LAC (demos start at 23:30!):

### Next Up

In the next posts, OpenAV is going to show you what Proof-of-Concept work we’re doing – to demonstrate the value of the Ctlra library. Right now, you need hardware to test if Ctlra support is working as expected… that’s about to change!

Stay tuned, -Harry from OpenAV

## September 07, 2017

### News – Ubuntu Studio

#### 17.10 Beta 1 Release

Ubuntu Studio 17.10 Artful Aardvark Beta 1 is released! It’s that time of the release cycle again. The first beta of the upcoming release of Ubuntu Studio 17.10 is here and ready for testing. You may find the images at cdimage.ubuntu.com/ubuntustudio/releases/artful/beta-1/. More information can be found in the Beta 1 Release Notes. Reporting Bugs If […]

## September 06, 2017

### digital audio hacks – Hackaday

#### Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor

Echolocation projects typically rely on inexpensive distance sensors and the human brain to do most of the processing. The team creating SNAP: Augmented Echolocation are using much stronger computational power to translate robotic vision into a 3D soundscape.

The SNAP team starts with an Intel RealSense R200. The first part of the processing happens here because it outputs a depth map which takes the heavy lifting out of robotic vision. From here, an AAEON Up board, packaged with the RealSense, takes the depth map and associates sound with the objects in the field of view.

Binaural sound generation is a feat in itself and works on the principle that our brains process incoming sound from both ears to understand where a sound originates. Our eyes do the same thing. We are bilateral creatures so using two ears or two eyes to understand our environment is already part of the human operating system.

In the video after the break, we see a demonstration where the wearer doesn’t need to move his head to realize what is happening in front of him. Instead of a single distance reading, where the wearer must systematically scan the area, the wearer simply has to be pointed the right way.

Another Assistive Technology entry used the traditional ultrasonic distance sensor instead of robotic vision. There is even a version out there for augmented humans with magnet implants covered in Cyberpunk Yourself called Bottlenose.

Filed under: digital audio hacks, wearable hacks

## September 02, 2017

### OpenAV

#### 00: New OpenAV Website!

Hey Everybody!

The OpenAV website had been quiet for a while – but OpenAV has been as busy as ever! We just haven’t been keeping up with posting to social media – that’s all 🙂 So what’s been going on? Good question! Lots of coding, learning and re-working of crucial components of the linux-audio world, in order to enable next-gen software. Sounds lame, but building novel software requires well designed building-blocks, and sometimes they’re lacking. Stay tuned for future blog posts where we will talk through some of the cool stuff we’ve been working on.

Of course we attended the Linux Audio Conference (or just LAC) again this year, which was held in France for the first time. OpenAV presented about the Ctlra project – more info available on the Code – Ctlra page!

That’s all for now, stay tuned for the next update! -OpenAV

### fundamental code

#### Total Variation Denoising

Working with data is an important part of my day-to-day work. No matter if it’s speech, music, images, brain waves, or some other stream of data there’s plenty of it and there’s always some quality issue associated with working with the data. In this post I’m interested in providing an introduction to one technique which can be utilized to reduce the amount of noise present in some of these classes of signals.

Noise might seem abstract at first, but it’s relatively simple to quantify it. If the original signal, $x$, is known, then the noise, $n$, is any deviation in the observation, $y$, from the original signal.

$$y = x + n$$

Typically the deviation is measured via the squared error across all elements in a given signal:

$$\text{error} = ||x-y||^2_2 = \sum_i (x_i-y_i)^2$$
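As a minimal NumPy sketch of this setup (the step signal and the noise level here are arbitrary, illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)])  # original signal: a step
n = 0.1 * rng.standard_normal(x.size)            # additive white noise
y = x + n                                        # the observed signal
error = np.sum((x - y) ** 2)                     # squared error ||x - y||_2^2
```

Since $y - x = n$ exactly, the measured error here is just the energy of the noise; with only $y$ in hand, separating the two is the hard part.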

When only the noisy signal, $y$, is observed it is difficult to separate the noise from the signal. There is a wealth of literature on separating noise and many algorithms focus on identifying underlying repeating structures. The algorithm that this post focuses on is one which reduces the total variation over a given signal. One example of a signal with little variation is a step function:

A step function only has one point where a sample of the signal varies from the previous sample. The Total Variation denoising technique focuses on minimizing the number of points where the signal varies and the amount the signal varies at each point. Restricting signal variation works as an effective denoiser as many types of noise (e.g. white noise) contain much more variation than the underlying signal. At a high level, Total Variation (TV) denoising works by minimizing the cost of the output $y$ given the input signal $x$ (note that in the formulas below $x$ denotes the noisy input and $y$ the denoised output, reversing the roles used above), as described below:

$$\text{cost} = \text{error}(x, y) + \text{weight}*\text{sparseness}(\text{transform}(y))$$

Mathematically the full cost of TV denoising is:

\begin{aligned} \text{cost} &= \text{error} + \text{TV-cost} \\ \text{cost} &= ||x-y||_2^2 + \lambda ||y||_{TV} \\ ||y||_{TV} &= \sum |y_i-y_{i-1}| \end{aligned}

To see how the above optimization can recover a noisy signal, let's look at a noisy version of the step function:

After using the TV norm to denoise only a few points of variation are left:

The process of getting the final TV denoised output involves many iterations of updating where variations occur. Over the course of iterations, opposing variations cancel out and smaller variations are driven to $\Delta y = 0$. As the number of non-zero points decreases, a sparse solution is produced and noise is eliminated. For higher values of the TV weight, $\lambda$, the solution will be more sparse. For the noisy step function, $y$ and $\Delta y$ over several iterations look like:
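To make the iteration concrete, here is a minimal NumPy sketch of one standard way to carry out this minimization, the dual "iterative clipping" scheme (in the style of Selesnick's MM derivation). It is a simplified illustration under that choice of algorithm, not the author's Julia implementation, and the parameter values are illustrative:

```python
import numpy as np

def _Dt(z):
    """Adjoint of the first-difference operator D (where D @ x == np.diff(x))."""
    return np.concatenate(([-z[0]], -np.diff(z), [z[-1]]))

def tv_denoise(y, lam, iters=500):
    """1-D TV denoising: minimize ||y - x||^2 + lam * sum|x[i+1] - x[i]|
    by iterative clipping of the dual variables."""
    z = np.zeros(y.size - 1)   # one dual variable per signal difference
    alpha = 4.0                # >= largest eigenvalue of D @ D.T
    for _ in range(iters):
        x = y - _Dt(z)         # primal estimate from the current duals
        z = np.clip(z + np.diff(x) / alpha, -lam / 2, lam / 2)
    return y - _Dt(z)
```

Raising `lam` drives more of the differences `np.diff(x)` to zero, giving the sparser, more piecewise-constant solutions described above.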

For piecewise constant signals, the TV norm alone works quite well, however there are problems which arise with the output when the original signal is not a series of flat steps. To illustrate this consider a piecewise linear signal. When TV denoising is applied a stair stepping effect is created as shown below:

One of the extensions to TV based denoising is to add 'group sparsity' to the cost of variation. Standard TV denoising results in a sparse set of points where there is non-zero variation, resulting in a few piecewise constant regions. With the TV norm, the cost of varying at point $\Delta y_i$ within the signal does not depend upon which other, $\Delta y_j,\Delta y_k,\text{etc}$, points vary. Group Sparse Total Variation, GSTV, on the other hand reduces the cost for smaller variation in nearby points. GSTV therefore generally produces smoother results with more gentle curves for higher order group sparsity values as variation occurs over several nearby points rather than a singular one. Applying GSTV to the previous example results in a much smoother representation which more accurately models the underlying data.

Now that some artificial examples have been investigated, let's take a brief look at some real world data. One example of data which is expected to have relatively few points of abrupt change is the price of goods. In this case we’re looking at the price of corn in the United States from 2000 to 2017, in USD per bushel, as retrieved from http://www.farmdoc.illinois.edu/manage/uspricehistory/USPrice.asp . With real data it’s harder to define noise (or what part of the signal is unwanted); however, by using higher levels of denoising the overall trends can be observed within the time-series data:

If this short intro was interesting, I’d recommend trying out TV/GSTV techniques on your own problems. For more in-depth information there are a good few papers out there on the topic, with the original GSTV work being:

• I. W. Selesnick and P.-Y. Chen, 'Total Variation Denoising with Overlapping Group Sparsity', IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP). May, 2013.

• http://eeweb.poly.edu/iselesni/gstv/ - contains above paper as well as a MATLAB implementation

And if you’re using Julia, feel free to grab my re-implementation of Total Variation and Group Sparse Total Variation at https://github.com/fundamental/TotalVariation.jl

## August 29, 2017

### Pid Eins

#### All Systems Go! 2017 CfP Closes Soon!

The All Systems Go! 2017 Call for Participation is Closing on September 3rd!

Please make sure to get your presentation proposals for All Systems Go! 2017 in now! The CfP closes on Sunday!

In case you haven't heard about All Systems Go! yet, here's a quick reminder what kind of conference it is, and why you should attend and speak there:

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward. All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd. All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

In particular, we are looking for sessions including, but not limited to, the following topics:

• Low-level container executors and infrastructure
• IoT and embedded OS infrastructure
• OS, container, IoT image delivery and updating
• Building Linux devices and applications
• Low-level desktop technologies
• Networking
• System and service management
• Tracing and performance measuring
• IPC and RPC systems
• Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

systemd.conf will not take place this year; All Systems Go! takes its place. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

### GStreamer News

#### GStreamer Conference 2017: Registration now open

The GStreamer Conference 2017 will take place on 21-22 October 2017 in Prague (Czech Republic), just before the Embedded Linux Conference Europe.

It is a conference for developers, contributors, decision-makers, students, hobbyists, and anyone else interested in the GStreamer multimedia framework or open source multimedia technologies.

Registration now open

You can now register for the GStreamer Conference 2017 via the conference website.

Early-bird registration for professionals is available until 15th September.

We hope to see you there!

## August 28, 2017

### digital audio hacks – Hackaday

#### Turning On Your Amplifier With A Raspberry Pi

Life is good if you are a couch potato music enthusiast. Bluetooth audio allows the playing of all your music from your smartphone, and apps to control your hi-fi give you complete control over your listening experience.

Not quite so for [Daniel Landau] though. His Cambridge Audio amplifier isn’t quite the latest generation, and he didn’t possess a handy way to turn it on and off without resorting to its infrared remote control. It has a proprietary interface of some kind, but nothing wireless to which he could talk from his mobile device.

His solution is fairly straightforward, which in itself says something about the technology available to us in the hardware world these days. He took a Raspberry Pi with the Home Assistant home automation package and the LIRC infrared subsystem installed, and had it drive an infrared LED within range of the amplifier’s receiver. Coupled with the Home Assistant app, he was then able to turn the amplifier on and off as desired. It’s a fairly simple use of the software in question, but this is the type of project upon which so much more can later be built.
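The piece doing the actual work on the Pi is LIRC's irsend tool, which transmits a named code from the lircd configuration. As a rough sketch of the idea in Python (the remote name `cambridge_amp` and key name `KEY_POWER` are hypothetical placeholders for whatever is defined in your lircd.conf; [Daniel]'s actual setup drives this through Home Assistant rather than a standalone script):

```python
import subprocess

def amp_power(remote="cambridge_amp", key="KEY_POWER", dry_run=False):
    """Toggle the amplifier by sending one IR code via LIRC's irsend.

    The remote/key names are hypothetical and must match your lircd.conf.
    With dry_run=True the command list is returned without executing it.
    """
    cmd = ["irsend", "SEND_ONCE", remote, key]
    if not dry_run:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return cmd
```

Wrapping the call in a small function like this is what a home-automation package then exposes as a switch entity to the mobile app.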

Not so many years ago this comparatively easy project would have required a significant amount more hardware and effort. A few weeks ago [John Baichtal] took a look at the evolution of home automation technology, through the lens of the language surrounding the term itself.

Via Hacker News.

Filed under: digital audio hacks, home hacks

## August 27, 2017

### Libre Music Production - Articles, Tutorials and News

#### LMP Asks #24: An interview with Luciano Dato

This time we talk to Luciano Dato, creator of Noise Repellent, a realtime noise reduction plugin.

Hi Luciano, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

I live in Santa Fe, Argentina and I work as a sysadmin/technician in a small IT company.

## August 23, 2017

### open-source – CDM Create Digital Music

#### What if you used synthesizers to emulate nature and reality?

Bored with making presets for instruments, one sound designer decides to make presets for ambient reality – and you can learn from the results.

“Scapes” is a multi-year, advanced journey into the idea that the synthesizer could sound like anything you imagine. Once you’ve grabbed this set of Ableton Live projects, you can bliss out to the weirdly natural results. Or you can tear apart the innards, finding everything from tricks on how to make cricket sounds synthetically to a veritable master class in using instruments like Ableton’s built-in FM synthesizer Operator. The results are Creative Commons-licensed (and of course, you can also grab individual presets).

The project is the brainchild of sound designer Francis Preve. Apart from his prolific writing career and Symplesound soundware line, Fran has put his sound design work all over presets for apps, software (including Ableton Live), and hardware.

As a result, no one knows better than Fran how much of the work of making presets focuses on particular, limited needs. And that’s too bad. The thing is, there’s no reason to be restricted to the stuff we normally get in synth presets. (You know the type: “lush, succulent pads” … “crisp leads…” “back-stabbing basslines…” “chocolate-y, creamy nougat horn sections…” “impetuous, slightly condescending 80s police drama keyboard stacks…” or, uh, whatever. Might have made some of those up.)

No, the promise of the synthesizer was supposed to be unlimited sonic possibilities.

If we tend to recreate what we’ve heard, that’s partly because we’re synthesizing something we’ve taken some care in hearing. So, why not go back to the richness and complexity of sound as we hear it in everyday life? Why not combine the active listening of a soundwalk or field recording with the craft of producing something using synthesis, in place of a recording?

Scapes does that, and the results are – striking. There’s not a single sample anywhere in the four ambient environments, which cover a rainy day in the city, a midsummer night, a brook echoing with bird song, and a more fanciful haunted house (with a classic movie origin). Instead, these are multitrack compositions, constructed with a bunch of instances of Operator and some internal effects. Download the Ableton Live project files, and you see a set of MIDI tracks and internal Live devices.

You might not be fooled into thinking the result sounds exactly like a field recording, but you would certainly let it pass for Foley in film. (I think that fits, actually – film uses constructed Foley partly because we expect in that context for the sounds to be constructed, more the way we imagine we hear than what literally passes into our ears.)

You wouldn’t think this was internal Ableton devices – not by a longshot – but of course it is.

And that’s where Scapes is doubly useful. Whether or not you want to create these particular sounds, every layer is a master class in sound design and synthesis. If you can understand a cricket, a bottle rocket, a rainstorm, and a car alarm, then you’re closer not only to emulating reality, but to being able to reconstruct the sounds you hear in your imagination and that you remember from life. That opens up new galaxies of potential to composers and musicians.

It might be just what electronic music needs: to think of sound creatively, rather than trying to regurgitate some instrumentation you’ve heard before. This might be the opposite of how you normally think of presets: here, presets can liberate you from repetitive thought.

I’ve seen this idea before – but just once before, that I can think of. Andy Farnell’s Designing Sound, which began life as a PDF that was floating around in draft form before it matured into a book at MIT Press, took on exactly this idea. Fran’s scapes are “tracks,” collaged compositions that turn into entire environments; Farnell looks only at the component sounds one by one.

Otherwise, the two have the same philosophy: understand the way you hear sound by starting from scratch and building up something that sounds natural. Scapes does it with Ableton Live projects you can easily walk through. Designing Sound demonstrates this on paper with patches in the free and open source environment Pure Data. As Richard Boulanger describes that book, “with hundreds of fully working sound models, this ‘living document’ helps students to learn with both their eyes and their ears, and to explore what they are learning on their own computer.”

But yes – create sounds by really listening, actively. (Pauline Oliveros might have been into this.)

Designing Sound | The MIT Press

Sound examples

A PDF introducing Pure Data (the free software you can use to pull this off)

But grabbing Scapes and a PDF or paper edition of Designing Sound together would give you a pairing you could play with more or less for the rest of your life.

Scapes is free (only Ableton Live required), and available now.

https://www.francispreve.com/scapes/

For background on how this came about: THE ORIGIN OF SCAPES [TL;DR EDIT]

The post What if you used synthesizers to emulate nature and reality? appeared first on CDM Create Digital Music.

## August 22, 2017

### rncbc.org

#### Vee One Suite 0.8.4 - A Late-Summer'17 release

Greetings!

The Vee One Suite of old-school software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler) welcomes a brand new fourth member: padthv1, a polyphonic additive synthesizer, now joining the late-summer'17 release party.

All available in dual form:

• a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
• a LV2 instrument plug-in.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And now being the gang-of-four!

## synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.4 (late-summer'17) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

change-log:

• Disabled "Custom style theme" option on LV2 plug-in form.
• Brand new LFO Balance parameter introduced.

website:
http://synthv1.sourceforge.net

http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

## samplv1 - an old-school polyphonic sampler

samplv1 0.8.4 (late-summer'17) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

change-log:

• Disabled "Custom style theme" option on LV2 plug-in form.

website:
http://samplv1.sourceforge.net

http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

## drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.4 (late-summer'17) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

change-log:

• Disabled "Custom style theme" option on LV2 plug-in form.

website:
http://drumkv1.sourceforge.net

http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

## padthv1 - an old-school polyphonic additive synthesizer

padthv1 0.8.4 (late-summer'17) is out! (NEW!)

padthv1 is based on the PADsynth algorithm by Paul Nasca, as a special variant of additive synthesis.

change-log:

• First public release.

website:

git repos:

Enjoy && have fun ;)

### digital audio hacks – Hackaday

Have a beautiful antique radio that’s beyond repair? This ESP8266-based Internet radio by [Edzelf] would be an excellent starting point to get it running again, as an alternative to a Raspberry Pi-based design. The basic premise is straightforward: an ESP8266 handles the connection to an Internet radio station of your choice, and a VS1053 codec module decodes the stream to produce an audio signal (which will require some form of amplification afterwards).

Besides the excellent documentation (PDF warning), where this firmware really shines is the sheer number of features that have been added. It includes a web interface that allows you to select an arbitrary station as well as cycle through presets, adjust volume, bass, and treble.

If you prefer physical controls, it supports buttons and dials. If you’re in the mood for something more Internet of Things, it can be controlled by the MQTT protocol as well. It even supports a color TFT screen by default, although this reduces the number of pins that can be used for button input.

The firmware also supports playing arbitrary .mp3 files hosted on a server. Given the low parts count and the wealth of options for controlling the device, we could see this device making its way into doorbells, practical jokes, and small museum exhibits.

To see it in action, check out the video below:

[Thanks JeeCee]

Filed under: digital audio hacks, radio hacks

## August 19, 2017

### open-source – CDM Create Digital Music

#### Here are some of our favorite MeeBlip triode synth jams

We say “play” music for a reason – synths are meant to be fun. So here are our favorite live jams from the MeeBlip community, with our triode synth.

And, of course, whether you’re a beginner or more advanced, this can give you some inspiration for how to set up a live rig – or give you some idea of what triode sounds like if you don’t know already. We picked just a few of our favorites, but if we missed you, let us know! (audio or video welcome!)

First, Olivier Ozoux has churned out some amazing jam sessions with the triode, from unboxing to studio. (He also disassembled our fully-assembled unit to show the innards.)

The amazing Gustavo Bravetti is always full of virtuosity playing live; here, that distinctive triode sound cuts through a table full of gear. Details:

Again ARTURIA’s Beat Step Pro in charge of randomness (accessory percussions and subtle TB303). Practically all sounds generated on the black boxes, thanks Elektron, and at last but no least MeeBlip’s [triode] as supporting melody synth. Advanced controls from Push and Launch Control using Performer , made with Max by Cycling ’74.

Here’s a triode with the Elektron Octatrack as sequencer, plus a Moog Minitaur and Elektron Analog RYTM. That user also walks through the wavetable sounds packed into the triode for extra sonic variety.

Novation’s Circuit and MeeBlip triode pair for an incredible, low power, low cost, ultra-portable, all-in-one rig. We get not one but two examples of that combo, thanks to Pete Mitchell Music and Ken Shorley. It’s like peanut butter and chocolate:

One nice thing about triode is that its sub oscillator can fatten up and round out the single oscillator of a 303. We teamed up with Roland’s Nick de Friez when the lovely little TB-03 came out to show how these two can work together. Just route the distinctive 303-style sequencer output to triode’s MIDI in, and have some fun:

Here’s triode as the heart of a rig with KORG’s volca series (percussion) and Roland’s TB-03 (acid bass) – adding some extra bottom. Thank you, Steven Archer, for your hopeful machines:

Get yours:
http://meeblip.com

The post Here are some of our favorite MeeBlip triode synth jams appeared first on CDM Create Digital Music.

## August 16, 2017

### ardour

#### Ardour 5.11 released

We are pleased to announce the availability of Ardour 5.11. Like 5.10, this is primarily a bug-fix release, though it also includes VCA automation graphical editing, a new template management dialog and various other useful new features.

Read more below for the full list of features, improvements and fixes.

### digital audio hacks – Hackaday

#### The Best Stereo Valve Amp In The World

There are few greater follies in the world of electronics than that of an electronic engineering student who has just discovered the world of hi-fi audio. I was once that electronic engineering student and here follows a tale of one of my follies. One that incidentally taught me a lot about my craft, and I am thankful to say at least did not cost me much money.

Construction more suited to 1962 than 1992.

It must have been some time in the winter of 1991/92, and being immersed in student radio and sound-and-light I was party to an intense hi-fi arms race among the similarly afflicted. Some of my friends had rich parents or jobs on the side and could thus afford shiny amplifiers and the like, but I had neither of those and an elderly Mini to support. My only option therefore was to get creative and build my own. And since the ultimate object of audio desire a quarter century ago was a valve (tube) amp, that was what I decided to tackle.

Pulling the amplifier out of storage in 2017, I’m going in blind. I remember roughly what I did, but the details have been obscured by decades of other concerns. So in an odd meeting with my barely-adult self, it’s time to take a look at what I made. Where did I get it right, and just how badly did I get it wrong?

Lovingly hand-drawn from life, missing the PSU components.

The amp itself sits in the removable portion of the Dymar chassis; I can’t remember what the dead instrument was, but Dymar produced a range of instruments as modules for a backplane. The front panel is a piece of sheet steel I cut myself, and is still painted in British Leyland Champagne Beige, the colour of that elderly Mini. It has a volume control, a DIN input socket which must have seemed cool only to me in 1992, and a Post Office Telephones terminal block for the speakers. Inside the chassis the amp is mounted on a piece of aluminium sheet, with a pair of PCL86 triode/pentode valves, a pair of output transformers and a supply smoothing capacitor on top, and all the smaller components on tag strips underneath. Though I say it myself, it’s a tidier job than I remember.

1969’s hot new device, already obsolete by 1980.

The circuit is simple enough, a single-ended Class A audio amplifier that I lifted, along with the PCL86 and the original output transformers, from a commonly available (at the time) scrap ITT TV set. These triode/pentodes were the integrated amplifier device of their day, as ubiquitous as an LM386 in later decades, containing a triode as preamplifier and a power output pentode, and capable of delivering a few watts of audio at reasonable quality with very few external components. They were also dirt cheap, the “P” signifying a 300mA series heater chain as used in TV sets that was considerably less desirable than the “E” versions which had the standard 6.3V heaters. Not a problem for me, as the Dymar PSU had a 12V rail that could happily supply close to the 300mA each to a couple of PCL86s.

My choice of parts must have been limited to those my university’s RS trade counter had in stock with the required working voltage, and they are a mixed bag that you wouldn’t remotely class as audio grade. There are a couple of enormous 450V 33μF electrolytics, and 250VAC Class Y 0.1μF polymer capacitors intended for use in power supply filters. I seem to have followed the idea of using a small and a large capacitor in parallel, probably for some youthful hi-fi mumbo-jumbo idea about frequency response. Otherwise the resistors look like carbon film components, something that probably made more sense to me in the early 1990s than it does now.

On top of the chassis, the original transformers taken from scrap TV sets turned out to be of such low quality that they tended to “sing” at any kind of volume, so I shelled out on a pair of the only valve audio output transformers I could find at the time, something that must have been a relic of a bygone era in the RS catalogue. The original valves were a pair of PCL86s from old TVs, but I replaced them with a “matched” pair of brand new PCL86s. I remember these cost me 50p (about 90¢ in ’92) each at a radio rally, and were made in Yugoslavia with a date code of January 1980. The new valves didn’t make any difference, but they made me feel better.

### How did this amplifier perform, and what did I learn from it?

Under the hood, and it’s all a bit messy.

In the first instance, it performed 110%, because I had a valve amp and nobody else did. The air of mystique surrounding this rarest of audio devices neatly sidestepped the fact that it wasn’t the best of valve amps, but that didn’t matter. Being a class A amplifier with new components, it came to the party with the lowest theoretical distortion it could have had due to its circuit topology. Another area of shameless bragging rights for my younger self, but in reality all it meant was that it got hot.

The sound at first power-on was crisp and sibilant, but with an obvious frequency response problem: it was bass-to-mid heavy, and not in a good way. Here was my first learning opportunity; I had just received an object lesson in real audio transformers not behaving like theoretical audio transformers. It had an impressive impulse response though, square waves came through it beautifully square on my battered old ’scope.

I could only go so far listening to a hi-fi that might have been a little fi but certainly wasn’t hi. My attention turned to that frequency response problem, and since we’d just been through the series of lectures that dealt with negative feedback I considered myself an expert in such matters who could fix it with ease. I cured the frequency response hump with a feedback resistor from output to input, playing around with values until I lit upon 330K as about right.
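The flattening works because of the classic negative-feedback relation: closed-loop gain is A/(1 + Aβ), so variations in the open-loop gain A get compressed by the loop. A minimal sketch, using illustrative numbers rather than anything measured on this amplifier:

```python
def closed_loop_gain(a_open, beta):
    """Gain of a negative-feedback amplifier: A / (1 + A*beta)."""
    return a_open / (1 + a_open * beta)

# Suppose the open-loop gain humps from 50 to 100 across the band (2:1).
# With an illustrative feedback fraction beta = 0.1:
low = closed_loop_gain(50, 0.1)    # ~8.33
high = closed_loop_gain(100, 0.1)  # ~9.09
# The 2:1 variation is squeezed to under 10% of variation: a flatter
# response, bought at the price of whatever the loop does to the phase.
```

The more gain you throw away into the loop, the flatter the result; the 330K value was found the same way as the numbers above, by trying things until it sounded right.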

### The Best Stereo Valve Amp In The World. Yeah, right.

Here was my second learning experience. I’d made a pretty reasonable amplifier as it happens, and it sounded rather good through my junk-shop Wharfedale Linton speakers with cheap Maplin bass drivers. I could indulge my then-held taste in tedious rock music, and pretend that I’d reached a state of hi-fi Higher Being. But of course, I hadn’t. I’d got my flat frequency response, but I’d shot my phase response to hell, and thus my impulse response had all the timing of a British Rail local stopping service. The ’scope showed square waves would eventually get there, but oh boy did they take their time. The sound had an indefinable woolliness to it; it was clear as a bell, but the sibilance had gone. I came away knowing more about the complex and unexpected effects of audio circuitry than I ever expected to, and with an amp that still had some bragging rights, but not as the audio genius I had hoped I might be.

The amplifier saw me through my days as a student, and into my first couple of years in the wider world. Eventually the capacitor failed in the Dymar PSU, and I bought a Cambridge Audio amp that has served me ever since. The valve amp has sat forlornly on the shelf, a reminder of a past glory that maybe one day I’ll resuscitate. Perhaps I’ll give it a DSP board programmed to cure its faults. Fortunately I have other projects from my student days that have better stood the test of time.

So. There’s my youthful folly, and what I learned from it. How about you, are there any projects from your past that seemed a much better idea at the time than they do now?

Filed under: classic hacks, digital audio hacks, Hackaday Columns, Interest, Original Art

## August 12, 2017

### Libre Music Production - Articles, Tutorials and News

#### FLOSS music convention in Germany in November

On the 4th and 5th of November 2017 you can attend the Sonoj Convention in Cologne, Germany. Admission is free. You will be able to enjoy demonstrations, talks and workshops about music production with open source software. Hands-on tutorials and workflow presentations can be expected. The Sonoj Convention is a great opportunity to meet like-minded people, maybe even to have engaging discussions! Everyone is welcome, whatever your musical or technological background.

## August 11, 2017

### digital audio hacks – Hackaday

#### We Should Stop Here, It’s Bat Country!

[Roland Meertens] has a bat detector, or rather, he has a device that can record ultrasound – the type of sound that bats use to echolocate. What he wants is a bat detector. When he discovered bats living behind his house, he set to work creating a program that would use his recorder to detect when bats were around.

[Roland]’s workflow consists of breaking up a recording from his backyard into one-second clips, loading them into a Python program, and running some machine learning code to determine whether each clip is a recording of a bat or not, then using this to determine the number of bats flying around. He uses several Python libraries to do this, including TensorFlow and LibROSA.

The Python code breaks each one-second clip into twenty-two parts. For each part, he determines the max, min, mean, standard deviation, and max-min of the sample – if multiple parts of the signal have certain features (such as a high standard deviation), then the software has detected a bat call. Armed with this, [Roland] turned to machine learning so that he could offload the work of detecting the bats. Again, he turned to Python, this time with the Keras library.
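That per-part feature extraction is simple enough to sketch in plain Python. This is an illustration of the approach described above, not [Roland]’s actual code; the 22-way split follows the article, while the thresholds in `looks_like_bat` are made-up placeholders:

```python
import statistics

def clip_features(samples, n_parts=22):
    """Split a one-second clip into equal parts and compute the
    max, min, mean, standard deviation and max-min of each part."""
    part_len = len(samples) // n_parts
    features = []
    for i in range(n_parts):
        part = samples[i * part_len:(i + 1) * part_len]
        features.append({
            "max": max(part),
            "min": min(part),
            "mean": statistics.fmean(part),
            "std": statistics.pstdev(part),
            "range": max(part) - min(part),
        })
    return features

def looks_like_bat(features, std_threshold=0.2, min_parts=3):
    """Flag the clip if enough parts show a high standard deviation
    (the threshold values here are hypothetical, for illustration)."""
    busy = sum(1 for f in features if f["std"] >= std_threshold)
    return busy >= min_parts
```

In the article, features like these are what get handed to the Keras model, which learns the classification rather than relying on fixed thresholds.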

With a 95% success rate, [Roland] now has a bat detector! One that works pretty well, too. For more on detecting bats and machine learning, check out the bat detector in this list of ultrasonic projects and check out this IDE for working with Tensorflow and machine learning.

Filed under: digital audio hacks

## August 09, 2017

### Pid Eins

#### All Systems Go! 2017 Speakers

The All Systems Go! 2017 Headline Speakers Announced!

Don't forget to send in your submissions to the All Systems Go! 2017 CfP! Proposals are accepted until September 3rd!

A couple of headline speakers have been announced now:

• Alban Crequy (Kinvolk)
• Brian "Redbeard" Harrington (CoreOS)
• Gianluca Borello (Sysdig)
• Jon Boulle (NStack/CoreOS)
• Martin Pitt (Debian)
• Thomas Graf (covalent.io/Cilium)
• Vincent Batts (Red Hat/OCI)
• (and yours truly)

These folks will also review your submissions as part of the papers committee!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

## August 03, 2017

### open-source – CDM Create Digital Music

#### Export to hardware, virtual pedals – this could be the future of effects

If your computer and a stompbox had a love child, MOD Duo would be it – a virtual effects environment that can load anything. And now, it does Max/MSP, too.

MOD Devices’ MOD Duo began its life as a Kickstarter campaign. The idea – turn computer software into a robust piece of hardware – wasn’t itself so new. Past dedicated audio computer efforts have come and gone. But it is genuinely possible in this industry to succeed where others have failed, by getting your timing right, and executing better. And the MOD Duo is starting to look like it does just that.

What the MOD Duo gives you is essentially a virtualized pedalboard where you can add effects at will. Set up the effects you want on your computer screen (in a Web browser), and even add new ones by shopping for sounds in a store. But then, get the reliability and physical form factor of hardware, by uploading them to the MOD Duo hardware. You can add additional footswitches and pedals if you want additional control.

Watch how that works:

For end users, it can stop there. But DIYers can go deeper with this as an open box. Under the hood, it’s running LV2 plug-ins, an open, Linux-centered plug-in format. If you’re a developer, you can create your own effects. If you like tinkering with hardware, you can build your own controllers, using an Arduino shield they made especially for the job.

And then, this week, the folks at Cycling ’74 take us on a special tour of integration with Max/MSP. It represents something many software patchers have dreamed of for a long time. In short, you can “export” your patches to the hardware, and run them standalone without your computer.

This says a lot about the future, beyond just the MOD Duo. The technology that allows Max/MSP to support the MOD Duo is gen~ code, a more platform-agnostic, portable core inside Max. This hints at a future when Max runs in all sorts of places – not just mobile, but other hardware, too. And that future was of interest both to Cycling ’74 and the CEO of Ableton, as revealed in our interview with the two of them.

Even broader than that, though, this could be a way of looking at what electronic music looks like after the computer. A lot of people assume that ditching laptops means going backwards. And sure enough, there has been a renewed interest in instruments and interfaces that recall tech from the 70s and 80s. That’s great, but – it doesn’t have to stop there.

The truth is, form factors and physical interactions that worked well on dedicated hardware may start to have more of the openness, flexibility, intelligence, and broad sonic canvas that computers did. It means, basically, it’s not that you’re ditching your computer for a modular, a stompbox, or a keyboard. It’s that those things start to act more like your computer.

Anyway, why wait for that to happen? Here’s one way it can happen now.

Darwin Grosse has a great walk-through of the MOD Duo and how it works, followed by a guide to getting started with Max:

The MOD Duo Ecosystem (an introduction to the MOD Duo)

Content You Need: The MOD Duo Package (an intro to working with Max)

An alternative: the very affordable OWL Pedal is similar in function, minus that slick browser interface. It can load Max gen~ code, too:

https://hoxtonowl.com/

New Tutorials including Max MSP on the OWL!

Pd users, that works, too – via Heavy (I think on the MOD, as well):

OWL & Heavy – a Pd patch on the OWL

The post Export to hardware, virtual pedals – this could be the future of effects appeared first on CDM Create Digital Music.

## August 02, 2017

### Libre Music Production - Articles, Tutorials and News

#### MOD Duo and Max/MSP integration

Max/MSP users can now easily convert their Gen objects into LV2 plugins, add them to the roster of MOD Duo plugins and bring them to the stage!

## August 01, 2017

### MOD Devices Blog

#### NEW! MOD Duo and Max/MSP integration!

Max/MSP users can now easily convert their gen~ objects into LV2 plugins, add them to the roster of MOD Duo plugins and bring them to the stage!

## More power to performing digital musicians

There’s no shortage of signal processing environments available to musicians who want to manipulate digital audio. Their use has spread to homes, studios and even stages everywhere. We’ve all seen this revolution take place, with computers popping up at concerts and the advent of laptop music performance. But a computer is not an instrument and a musician shouldn’t become a mere button pusher or mouse handler.

That’s where we come in. The MOD Duo is a computing platform for performing musicians: a computer in a box, optimised to process audio during live performances. And since our creative platform is based on an open format, it can be useful to scores of artists and developers.

The Max/MSP software is one of the greatest and most powerful tools in this field and it has become one of the most used visual programming languages for music and multimedia since its inception in the 1980s. For months now, we’ve been collaborating with Cycling’74, the developers and maintainers of Max/MSP, in order to provide a new stage experience for their users and encourage developers to port their patches and objects into the MOD Duo plugin store.

We’ve come up with a Max package that takes the code exported from Max/MSP gen~ objects, compiles it into an LV2 plugin and puts it onto the Duo. The whole idea is to simplify the process of turning Max/MSP patches into plugins that can be used on stage, without the burden of the computer and with the added controllability provided by the Duo.

## “Wait a minute… I’m confused. What is Gen?”

If you’re not familiar with Max or have never heard of Gen, here’s an overview, courtesy of our friends over at Cycling’74:

“Gen is a new approach to the relationship between patchers and code. The patcher is the traditional Max environment – a graphical interface for linking bits of functionality together. With embedded scripting such as the js object, text-based coding became an important part of working with Max as it was no longer confined to simply writing Max externals in C. Scripting however still didn’t alter the logic of the Max patcher in any fundamental way because the boundary between patcher and code was still the object box. Gen represents a fundamental change in that relationship. The Gen patcher is a new kind of Max patcher where Gen technology is accessed.”

If you are an aficionado and were just waiting for this kind of solution to appear, we’ve come up with documentation to make the process of getting your Gen-based plugins to the MOD Duo as effortless as possible, with a wiki entry and a tutorial that shows you how to create your own plugins.

You can export your gen~ code straight from Max with the new MOD Duo Package

## Why is it cool to have this integration?

This is no small feat.

We’re significantly speeding up the learning curve for adding personalized plugins to the Duo and also allowing digital musicians to take their Max/MSP objects to the stage without a computer. These new plugins will be fully compatible with the 200+ ones that are already available, allowing the creation of elaborate audio chains.

Right now, after being added to the users’ machines, these new plugins can be posted to the forum and we will publish them manually on the plugin store (we’re working on automating this process). Soon, when our commercial plugin store is set up and ready to go, Max/MSP wizards (and all of the MOD community) will be able to offer their creations for a fee, creating a new business in the process, but also promoting the development of more sophisticated audio apps by programmers. Until the commercial store arrives, demo versions of these plugins can still be published.

In the future, we’ll keep adding new integrations and documentation for other languages and protocols such as Pure Data, Faust and OSC. Creating plugins for the Duo will be within everyone’s reach.

## What are the current plugins that come from Max/MSP gen~ objects?

It all started a while ago, with the official gen~ plugin export project that Cycling ’74 created for building audio applications and plugins. Our software developer, the legendary falkTX, then started an implementation of that focused on LV2 and Linux, which he added to DISTRHO, his own open-source project providing cross-platform audio plugins.

At that time, he and our intern Nino de Wit began to run some tests and develop plugins from gen~ code. From this effort, the initial project was born. Shortly afterwards, Nino began developing his own, more complex plugins. As Cycling ’74 became aware of this, they contacted us and we decided to build a seamless integration between both platforms.

Here are the plugins derived from Max/MSP gen~ objects, conceived during Nino’s internship at MOD HQ in Berlin. These little gems have been making many MOD users happy since they came around. Here’s a glimpse at the type of plugin this integration will enable users to create:

### Shiroverb

Shiroverb is a shimmer-reverb based on the “Gigaverb” gen~ patch, ported from the implementation by Juhana Sadeharju, and the “Pitch-Shift” gen~ patch, both in Max/MSP.

### Modulay

Modulay is an analog-style delay with variable types of modulation based on the setting of the morph control. All the way counterclockwise is chorus, 12 o’clock is vibrato, and all the way clockwise is flanger, with every setting in between morphing from one effect to the other.

### Larynx

Larynx is a simple sine-modulated vibrato with a tone control.

### Harmless

Harmless is a wave-shapeable harmonic tremolo with a stereo phase control.

## Pedalboards section!

You can check out and listen to the Shiro plugins in action in these sweet pedalboards that our community has created and shared (and load them into your Duo at the click of a button):

### Swell Boost:

Everything a multi-layered guitarist needs in their arsenal: a succulent and smooth shimmer swell pad on one path, and a shrieking shrill boost on another path that cuts through the mix like a Japanese ginsu knife! The best part is that there’s a 4-way toggle switch at the start of the chain that allows the source signal to constantly flow, while the 1st and 4th switches toggle either the pad or the boost (or BOTH if you want the NUCLEAR option!). This allows the guitarist to cut the pad or boost while the pad’s trails remain in the mix, as the source signal never changes.

### ShimmerMachine:

Using the Harmless plugin combined with the Larynx on a Novation Circuit.

### Harmless JCM:

Guitarix JCM-800 and the Shiro Harmless modulator. Such a beautiful sound! Add a little looper and you’re good to go.

### Soap Bubbles:

Psychedelic sound based on Larynx, Chorus and some Panning.

Enjoy the Modulay in a simple guitar setup.

Huge pad sound with a parallel path for melody, played on a bass.

### KalimbaJammSessionMOD

Pedalboard used during the Startup Garden at Wallifornia Music Tech. We wanted to show visitors that you can also use the MOD Duo with acoustic instruments and created this nice pad using a synth, a sequencer and a kalimba with a pickup for some solo play. Listen to that tremolo!

### Makeshift Pitchshift:

Using the ‘shimmer’ in the Shiroverb as a pitch-shifter on the bass.

We want to know if you are as thrilled as we are with this integration. Do you look forward to creating your own plugins from Max/MSP? Are you excited about the commercial store? Share your thoughts in the comments below!

## PS: Special Offer

If you buy a MOD Duo before September 30th 2017, you get Max7 for 9 months COMPLETELY FREE.

If you are already a MOD user, you can get Max7 for 9 months for free as well by completing the Great Book of Pedalboards form.

## July 31, 2017

### open-source – CDM Create Digital Music

#### The Viktor NV-1 is a powerful synth running in your browser

Its name is Viktor, and it’s a synth you can play with for free in a browser – with a mouse, or finger, or keyboard, or even MIDI.

Not news, but – heck of a lot of fun to play with.

Now part of a growing number of Web Audio (and even Web MIDI) synths, the Viktor NV-1 is a surprisingly powerful diversion. You get three oscillators, two envelopes (one for amplitude, one for filter), a filter, LFO, reverb, delay, compressor, and loads of controls.

Because it lives in a browser, it’s also easy to save and share presets with others. So, for instance, here you go:

https://goo.gl/ugqbkT

The developer also has a lovely explanation of how this works:

It’s built on top of the Web Audio API (WAA). The WAA is very nicely organized and easy to use. Basically it provides a variety of NodeTypes (responsible for sound generation, editing or analysis) which you combine to your liking, creating a graph through which your sound is shaped.

Also worth noting – how it was built:

Web Audio API, Web MIDI API, Local Storage (through npm module “store”). For the effects section I used Tuna.js.

AngularJS, webaudio-controls (I regret this decision, since these controls are full of bugs and I had to fix several of them before releasing), Bootstrap, Font Awesome, Font Orbitron and Stylus are what I used for the UI.

Instead of using Angular alone, for dependency management, I use Browserify, which provides the nice CommonJS format/style of module creation and requiring.

Angular isn’t very Browserify-friendly so I had to do some stitching in my initial setup (browserify-shim, browserify-ng-html2js etc.) but once the setup was ready, development really felt like a breeze.

Grunt and multiple grunt-contrib-‘s are used for the build (and development rebuild).

I drew the images on Pixelmator.

Try it:

http://nicroto.github.io/viktor/

Or grab the code (fully open source):

https://github.com/nicroto/viktor

The browser synth is the work of Nikolay Tsenkov.

The post The Viktor NV-1 is a powerful synth running in your browser appeared first on CDM Create Digital Music.

### blog4

#### new exclusive Notstandskomitee track released

The new Notstandskomitee track Ungetuem can be found exclusively on this compilation by Silent Method Records, currently as a download but soon also on vinyl and cassette.
https://silentmethodrecords.bandcamp.com/album/z-e-n-va-compilation

## July 29, 2017

### digital audio hacks – Hackaday

#### Bessel Filter Design

Once you fall deep enough into the rabbit hole of any project, specific information starts getting harder and harder to find. At some point, trusting experts becomes necessary, even if that information is hard to find, obtuse, or incomplete. [turingbirds] was having this problem with Bessel filters, namely that all of the information about them was scattered around the web and in textbooks. For anyone else who is having trouble with these particular filters, or simply wants to learn more about them, [turingbirds] has put together a guide with all of the information he has about them.

For those who don’t design audio circuits full-time, a Bessel filter is a linear analog filter with a maximally flat group delay, which preserves the waveshape of signals within the filter’s passband rather than distorting them in some way. [turingbirds]’s guide goes into the foundations of where the filter coefficients come from, instead of blindly using lookup tables like he had been doing.
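For context on where those lookup-table values originate: an nth-order Bessel low-pass prototype uses the reverse Bessel polynomial θn(s) as its denominator, and θn obeys the recurrence θn(s) = (2n - 1)·θn-1(s) + s²·θn-2(s). A small pure-Python sketch of that recurrence (an illustration, not code from [turingbirds]’s guide):

```python
def reverse_bessel(n):
    """Coefficients of the reverse Bessel polynomial theta_n(s),
    lowest power first, from the recurrence
    theta_n = (2n - 1) * theta_{n-1} + s^2 * theta_{n-2}."""
    if n == 0:
        return [1]
    prev2, prev1 = [1], [1, 1]  # theta_0 = 1, theta_1 = s + 1
    for k in range(2, n + 1):
        term1 = [(2 * k - 1) * c for c in prev1] + [0, 0]  # (2k-1)*theta_{k-1}
        term2 = [0, 0] + prev2                             # s^2 * theta_{k-2}
        prev2, prev1 = prev1, [a + b for a, b in zip(term1, term2)]
    return prev1

# reverse_bessel(3) -> [15, 15, 6, 1], i.e. theta_3(s) = s^3 + 6s^2 + 15s + 15,
# the denominator of a 3rd-order Bessel low-pass prototype.
```

From there, H(s) = θn(0)/θn(s) gives the normalised low-pass prototype, and frequency scaling (or a bilinear transform, for digital work) turns it into a practical filter.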

For anyone else who uses these filters often, this design guide looks to be a helpful tool. Of course, if you’re new to the world of electronic filters there’s no reason to be afraid of them. You can even get started with everyone’s favorite: an Arduino.

Filed under: digital audio hacks

## July 26, 2017

### MOD Devices Blog

#### Top 5 Greatest Things About Our Time at Wallifornia Music Tech

We were at Wallifornia Music Tech during Les Ardentes festival in Liège and it was a memorable week. Here’s a short account of our adventures.

Greetings MOD Community,

There’s a lot going on and the next weeks will be full of unveilings, but we had to take some time to share with you some of the brilliant moments we had earlier this month at Wallifornia Music Tech, during the Les Ardentes music festival in Liège, Belgium.

These are the 5 greatest things that happened during the Startup Acceleration Program, the Wallifornia Music Tech hackathon and the Startup Program, and some of the concerts we attended.

### 5 – Spending Time in the Lovely Liège

I had been to Liège once, a couple of years ago, and spent the whole time at the university for a conference. The weather was not good and I didn’t get to see much of the city. This time, however, the weather was surprisingly warm and we went out to see some of the sights and enjoy what the town has to offer. We stayed at a quaint little place at Rue Pierreuse, in an artsy neighbourhood on top of a hill.

In general, Belgian people were just incredibly friendly and thoughtful, making sure that we had everything we needed at all times and always proud to show us the hidden gems in their city. In this sense, a special acknowledgement must go to the team from Leansquare – Alice, Clémentine, Gérôme, Roald and Ben, in particular – who were responsible for the excellent organisation of the Startup Acceleration Program. They have an amazing co-working space in the heart of the city and took care of every little detail like a well-tuned machine and with a constant smile.

Everyone is happy in Belgium.

Also, Les Ardentes music festival in itself was a spectacular event, in a wonderful location by the river, with an awesome lineup mixing nostalgic headliners, up-and-coming favourites and fresh new acts (more on that later!). The logistics and infrastructure were super well handled for such a big festivity and we managed to enjoy some nice concerts along the way.

### 4 – Seeing Some Sweet Hackathon Action

We were partners and sponsors of the hackathon during the Wallifornia Music Tech Living Lab and provided some MOD Duos and our API for the hackers to use in their projects. The hackathon was masterfully organised and conducted by Luann Williams and Travis Laurendine, who are, among other things, the people responsible for the SXSW hackathon.

They did a great job motivating the teams and guaranteeing smooth sailing for the tens of hard-at-work and exhausted hackers.

Travis and Luann counting the jury’s votes for best hack.

During this hackathon, we met two amazing lords of bits, bytes and bobs, Tom Brückner and Jean-Michel Dewez, who decided to include a little bit of MOD in their hacks. Tom made a web app that provided information on a given song based on Musimap‘s artificial intelligence API. He used data from our pedalboard feed API in order to propose the corresponding pedalboard and ended up as second runner-up.

Jean-Michel, aka Chantal Goret, an 8-bit virtuoso, wanted to use the Duo with Beatmotor, his hand-crafted MIDI controller and instrument. It was built using an old cigar box, an Arduino board, some knobs, buttons and an ultrasound sensor. He used a Teensy board to send MIDI notes to the Duo. For this superb retro hack, he won first prize!

Chatting about 8-bit music hacks and the Duo with Jean-Michel, winner of the hackathon, with his cigar-case 8-bit MIDI instrument/controller.

### 3 – Sharing an Intense Week With Eight Fantastic Startups

We spent the whole week with an outstanding group of startuppers from all over the world. There was so much creativity flowing in these intensive training sessions that we all came out fueled with ideas and benefitted from our shared experiences.

I’ll try to summarise all their projects because you should definitely keep an eye out for these gals and guys:

• Beatlinks: It’s a whole living Musiverse in a game that teaches DJ skills to kids, plus an animation.
• Big Boy Systems: The first recording system that unites binaural sound and a 3D camera in order to create the ultimate immersive experience.
• Paperchain: They provide data services for the music industry, from the collection and organisation of rights information to the identification of unclaimed royalties.
• Roadie: An app that uses an AI to help bands with tour schedules, based on data from streaming services and social media.
• Sofasession: They have developed an app for online music collaboration and another that connects music students with music schools.
• Soundbops: A toy that teaches the fundamentals of music theory to young children. Their Kickstarter is coming out soon – stay tuned!
• Warm: A huge real-time radio monitor that allows musicians to find out where their songs are being played.
• WIP Music: The so-called Tinder for Music. An app that connects musicians to their audience and the venues that can host them.

### 2 – Meeting Trombone Shorty and His Band Backstage

Thanks to our new friend Travis Laurendine, aka Roi Lion d’Orléans, aka Ideas Gardener, we went backstage to meet Trombone Shorty and his band after their concert.

First, a brief word about their performance. It’s been a while since I’ve seen such energy on stage, and there were several mind-blowing moments when I sort of lost it. The whole band is an example of groove, joy and technique.

We met them all: guitarist Pete Murano, drummer Joey Peebles, bass player Mike Bass-Bailey, tenor sax BK Jackson, baritone sax Dan Oestreicher and the man himself, Troy “Trombone Shorty” Andrews. Travis had sent them a video of the previous day at the hackathon with a short demo of the device and they had gone looking on the website.

Suffice to say, they wanted a Duo. Dan even knew about the MOD Duo from before and is now preparing some demos for us. He plays baritone sax but also has a one-man band so we’re very excited.

Magical moment: Dan Oestreicher (baritone sax, centre left) and Mike Bass-Bailey (bass, centre right) from Trombone Shorty’s band after the concert with their new companion Duo and footswitch extension.

### 1 – Winning the Startup Acceleration Program

We spent the whole week learning and gathering input from a tremendous team of coaches and experts. We were expected to hone our pitches and enthral a jury of investors and affluent music business advisors.

We all worked very hard to perfect our presentations and find a way to squeeze every last bit of information into under 7 minutes. Gianfranco was selected as the first speaker and gave it his all.

You can see his pitch for the MOD Duo below:

He was asked questions by industry experts such as Rishi Patel, Virginie Berger and Ted Cohen, and later sat down to meet with them and other investors.

In the end, we were honoured to take home the title of best startup of the Accelerate & Invest program, which crowned a great week and, we hope, presages even greater things to come.

Receiving a sizable check from Armonia CEO Virginie Berger and music industry legend Ted Cohen

Honourable mentions: the Ramen at the restaurant next to Leansquare and the Boulets avec Frites, the jam sessions we held with our Dutch acolytes Pjotr Lasschuit and Jesse Verhage at our booth during the Startup Garden using kalimbas, Novation Circuits, synths and a wide assortment of controllers, meeting Belgian geniuses Hermutt Lobby, La Femme’s retro-punk concert…

## July 25, 2017

### digital audio hacks – Hackaday

#### Designing the Atom Smasher Guitar Pedal

[Alex Lynham] has been creating digital guitar pedals for a while, and after releasing the Atom Smasher, a glitchy lo-fi digital delay pedal, people started asking him how he designed digital effects pedals rather than analog ones. In fact, there was enough interest that he wrote an article on it.

The article starts with some background on [Alex], the pedals he’s built and why he chose not to work on pedals full-time. Eventually, the article gets to how [Alex] designed the Atom Smasher. He starts by describing the chip he used, the same one that many hobbyists, as well as commercial builders, use for delay-based effects – the SpinSemi FV-1.

The FV-1 is a SMD chip used for digital delays and other effects that require a delay line – reverbs, choruses, flangers, etc. It’s programmed with an assembly-style language called SpinASM. [Alex] goes over some of the tools and references he used when designing for the pedal. He also has a list of tips for would-be effect pedal designers which work whether you’re designing digital or analogue effects.
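The delay line at the heart of those effects is easy to illustrate outside the chip. Below is a minimal Python sketch of a feedback delay line (an illustration of the concept only, not SpinASM or actual FV-1 code):

```python
def make_delay(length, feedback):
    """Feedback delay line as a circular buffer -- the structure behind
    delays, choruses and flangers (conceptually; the FV-1 does this in
    dedicated delay RAM)."""
    buf = [0.0] * length
    pos = 0

    def process(x):
        nonlocal pos
        y = buf[pos]                   # read the sample written `length` steps ago
        buf[pos] = x + feedback * y    # write input plus feedback into the line
        pos = (pos + 1) % length
        return y

    return process

delay = make_delay(length=4, feedback=0.5)
out = [delay(x) for x in [1.0] + [0.0] * 8]
print(out)  # the impulse reappears at sample 4, then again at sample 8, halved
```

Reverbs, choruses and flangers are variations on the same structure: several such lines, modulated read positions, and mixing.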

[Alex] ends his article saying that, in the future, he might make the schematic and code available, but for the moment he’s not. The FV-1 is an interesting chip, and [Alex]’s article gives a nice high-level look at its features and how to develop for it. For some interesting guitar pedal related articles, check out this one using effects pedals to get better audio in your car, and here’s one about playing with DSP and designing a pedal with it.

Filed under: digital audio hacks, musical hacks

## July 24, 2017

### Libre Music Production - Articles, Tutorials and News

#### LMP Asks #23: An interview with Jacek from ZARAZA

This time we talk to Jacek from ZARAZA, one of the two members of this experimental/industrial doom/death/sludge metal band.

## Where do you live and what do you do for a living?

I (Jacek) currently live in Ecuador, after immigrating here from Canada about 1.5 years ago. Originally I am Polish; I immigrated to Canada in 1990 when I was 20.

## July 21, 2017

### Linux – CDM Create Digital Music

#### Aphex Twin gave us a peek inside a 90s classic. Here’s what we learned.

Aphex Twin’s “Vordhosbn” just got a surprising video reveal, showing how the track was made. So let’s revisit trackers and 90s underground music culture.

You’re probably familiar with the term “white label,” but where did that term originate? Back in the early days of DJing, DJs were very territorial about their crate digging. Sometimes, in order to avoid rival DJs looking at their decks to ID their selections (this is way before the days of Shazam, remember), DJs would rip the labels off a particularly rare record, leaving the white label residue with no identifying information.

Similarly, the 90s were an interesting time for music production. With the advent of computer sequencers, music became more complex – and in the wild west days before YouTube tutorials, concert phone vids, and everyone using Ableton Live, there was legitimate mystery behind how some of the most complex electronic music was made. Max? SuperCollider? Some homebrew software unavailable to the plebs?

If mystery in electronic music production was a game in the 90s, then Richard D. James was its undisputed winner. As Aphex Twin and a host of other pseudonyms, he created mind-bending sequences. As an interview subject, he was equal parts prankster and cagey. Sure, there was an idea of what the IDM greats were up to – Autechre and Plaid used Max, Squarepusher used Reaktor, Aphex used…something? The mystery has always been part of James’ appeal – here is a man who has claimed to sleep only four hours a night, or to have built or heavily modified all of his hardware, or to be sitting on hundreds if not thousands of unreleased tracks, among other tall tales.

Around 2014, something flipped with Richard D. James. After releasing Syro, his first album in 13 years as Aphex Twin, he opened the floodgates with a massive hard drive dump onto SoundCloud – seems he wasn’t lying about all those tracks after all. Following up on this, today you can see the debut of a custom Bleep store for Aphex Twin, including loads of unreleased bonus tracks to go with his albums.

Of most interest to the nerds, however, has got to be this seemingly innocuous video, in which we get a trollingly-effected screencast of Drukqs track “Vordhosbn”, playing out in the vintage tracker PlayerPro. James had previously identified PlayerPro as his main environment for making Drukqs – now we have video of it in action:

So, there we have it. A classic Aphex Twin track with the curtain drawn up. What can we learn from this video? A few things:

• PlayerPro’s tracks were all monophonic, so the chords in “Vordhosbn” had to be made using multiple tracks
• As expected with a tracker, it’s largely built from samples – likely from James’ substantial hardware collection
• Hey, those oscilloscopes and spectral displays are fun

Perhaps what’s best about this video is that it shows an Aphex classic for what it is – a track, composed in much the same way as any other electronic musician might do it. It doesn’t detract from the special qualities of Aphex’s music, but it does show us what was really going on behind all the mystery – music-making.

### Keep Track of It

It’s worth spending a moment to celebrate trackers. Long before the days of piano rolls, trackers were the best way to make intricate sequences using a computer. YouTube is riddled with classic jungle tracks from the mid-90s using software like OctaMed:

For a dedicated community, trackers are still the way to go. And there’s no better tracker around now than Renoise – whose developers have done a fantastic job bringing the tracker workflow into the 21st century. Check out this video of Venetian Snares’ “Vache” done in Renoise:

Like most trackers, Renoise has something of a steep learning curve to get all the key commands right; once you’re there, however, you’ll find it to be a very nimble environment for wild micro-edits and crazy sequences. There’s definitely a reason why it remains a tool of choice for breakcore producers!

Do you use a tracker? What do you think of the workflow? What’s the best way for someone to get started with a tracker? Let us know in the comments!

Ed.: PlayerPro is available as free software for Mac, Windows, Linux … and yes, even FreeBSD.

https://sourceforge.net/projects/playerpro/

Returning CDM contributor David Abravanel is a marketer, musician, and technologist living in New York. He loves that shiny digital crunch. Follow him at http://dhla.me

The post Aphex Twin gave us a peek inside a 90s classic. Here’s what we learned. appeared first on CDM Create Digital Music.

### Pid Eins

#### Video of my casync Presentation @ kinvolk

The great folks at kinvolk have uploaded a video of my casync presentation at their offices last week.

The slides are available as well.

Enjoy!

## July 16, 2017

### GStreamer News

#### Orc 0.4.27 bug-fix release

The GStreamer team is pleased to announce another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

• sse: preserve non volatile sse registers, needed for MSVC
• x86: don't hard-code register size to zero in orc_x86_emit_*() functions
• Fix incorrect asm generation on 64-bit Windows when building with MSVC
• Support build using the Meson build system

## July 15, 2017

### GStreamer News

#### GStreamer 1.12.2 stable release (binaries)

Pre-built binary images of the 1.12.2 stable release of GStreamer are now available for Windows 32/64-bit, Android, iOS and Mac OS X.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## July 14, 2017

### open-source – CDM Create Digital Music

#### Here’s how to download your own music from SoundCloud, just in case

SoundCloud’s financial turmoil has prompted users to consider: what would happen if the service were switched off? Would you lose some of your own music?

Frankly, we all should have been thinking about that sooner. To be very clear: there is no reason you should ever have a file you care about in just one location, no matter how secure and reliable you imagine that location to be. Key files are best kept in at least one online backup and in at least one locally accessible location (so you can get at them even without a fast connection).

There’s also no reason at this point to think SoundCloud is going to disconnect without warning – or indeed any indication from SoundCloud executives, publicly or privately, that they expect the service is going away. While recent staff cuts were painful for the whole organization, both those who remained and those who left, every suggestion is that the service is going to continue.

SoundCloud publicly has said as much. (Though, sorry – SoundCloud, you really shouldn’t be surprised. Vague messaging, no solid numbers on revenue, and a tendency not to go on record and talk to the press have made apocalyptic leaks the main picture people get of the company. In a week when you cut nearly half your staff and have limited explanation of what your plan is, then yeah, you wind up having to use the Twitter airhorn because people will panic.)

But the question of what’s happening to SoundCloud is immaterial. If you’ve got content that’s on SoundCloud and nowhere else, you’re crazy. This is really more of a wake-up call: always, always have redundancy.

The reality is, with any cloud service, you’re trusting someone else with your data, and your ability to get at that data is dependent on a single login. You might well be the failure point, if you lock yourself out of your own account or if someone else compromises it.

There’s almost never a scenario, then, where it makes sense to have something you care about in just one place, no matter how secure that place is. Redundancy neatly saves you from having to plan for every contingency.
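To make the two-locations rule concrete, here is a minimal Python sketch of the local-mirror step (the paths are hypothetical stand-ins so the sketch runs anywhere; in practice you would point `src` at your real music folder and pair this with a cloud backup):

```python
import pathlib
import shutil
import tempfile

# Stand-ins for ~/Music/masters and a backup drive -- hypothetical paths.
src = pathlib.Path(tempfile.mkdtemp(prefix="masters-"))
dst = pathlib.Path(tempfile.mkdtemp(prefix="backup-")) / "music-mirror"

(src / "track01.wav").write_text("fake audio data")

# One local copy; schedule this (cron, systemd timer) so it is set-and-forget.
shutil.copytree(src, dst)
print(sorted(p.name for p in dst.iterdir()))  # ['track01.wav']
```

A scheduled script like this is exactly the "set and forget" backup the article recommends below.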

Okay, so … yeah, if you are then nervous about some music you care about being on SoundCloud and aren’t sure if it’s in fact backed up someplace else, you really should go grab it.

Here’s one open source tool (hosted on GitHub, too) that downloads music.

(DownThemAll, the Firefox add-on, also springs to mind.)

Two services offering similar features are hoping they can attract SoundCloud users by helping them migrate their accounts automatically. (I don’t know what the audio fidelity of that copy is, or whether it includes the original file; I have to test this – and test whether these offerings really boast a significant competitive advantage.)
https://www.orfium.com/
http://hearthis.at

Could someone create a public mirror of the service? Yes, though it wouldn’t be cheap. Jason Scott (of Internet Archive fame) tweets that it could cost up to $2 million, based on the amount of data. (Anybody want to call Martin Shkreli? No?)

My hope is that SoundCloud does survive independently. Any acquirer would likewise be crazy not to maintain users and content; that’s the whole unique value proposition of the service, and there’s still nothing else quite like it. (The fact that there’s nothing quite like it, though, may give you pause on a number of levels.)

My guess is that the number of CDM readers and creators is far from enough to overload a service built to stream to millions of users, so I feel reasonably safe endorsing this use. That said, of course, SoundClouders also read CDM, so they might choose to limit or slow API access. Let’s see.

My advice, though: do grab the stuff you hold dear. Put it on an easily accessible drive. And make sure the media folders on that drive also have an automated backup – I really like cloud backup services like CrashPlan and Backblaze (or, if you have a server, your own scripts). But the best backup plan is one that you set and forget, one you only have to think about when you need it, and one that will be there in that instance. Let us know if you find a better workflow here.

Thanks to Tom Whitwell of Music thing for raising this and for the above open source tip.

I expect … this may generate some comments. Shoot.

The post Here’s how to download your own music from SoundCloud, just in case appeared first on CDM Create Digital Music.

### GStreamer News

#### GStreamer 1.12.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.
Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## July 11, 2017

### MOD Devices Blog

#### MOD Duo 1.4 Update Now Available

Dearest community,

After several weeks of testing, our latest software update is available! This one took a bit longer, since the testing period largely involved the Beta testing of our first peripheral, the footswitch extension (soon to receive its official name – stay tuned!), and also of the Arduino shield.

As usual, you can upgrade your MOD Duo by clicking on the update icon in the bottom right-hand corner, then on ‘Download’ and finally ‘Upgrade Now’. Wait a few minutes while the MOD updates itself automatically and enjoy your added features.

Here’s the rundown of release 1.4:

#### Control Chain

Control Chain is MOD’s custom way of connecting external devices. It is an open standard (including hardware, communication protocol, cables and connectors). You can do with Control Chain everything the MOD Duo’s hardware actuators do right now.

Compared to MIDI, Control Chain is far more powerful. For example, instead of using hard-coded values as MIDI does, Control Chain has what is called a device descriptor, and its assignment (or mapping) message contains the full information about the parameter being assigned, such as parameter name, absolute value, range and any other data.
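To make the contrast with MIDI’s hard-coded 0–127 values concrete, here is a hypothetical Python sketch of the kind of information such an assignment message carries (the field names are invented for illustration; this is not the actual Control Chain wire format):

```python
from dataclasses import dataclass

@dataclass
class ParameterAssignment:
    """Illustrative only -- not the real Control Chain message layout."""
    parameter_name: str   # a label a peripheral could show on its display
    value: float          # absolute value, instead of a scaled 0-127 integer
    minimum: float
    maximum: float
    unit: str

gain = ParameterAssignment("Gain", 0.0, -24.0, 24.0, "dB")
# A peripheral receiving this has everything it needs to render the parameter.
print(f"{gain.parameter_name}: {gain.value} {gain.unit}")  # prints "Gain: 0.0 dB"
```

A MIDI CC, by contrast, would deliver only a controller number and a 7-bit value, leaving name, range and units to out-of-band configuration.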
Having all that information on the device side allows developers to create powerful peripherals that can, for example, show the absolute parameter value on a display, use different LED colours to indicate a specific state, etc. And remember: you can daisy-chain up to 4 Control Chain peripherals to your MOD Duo!

You can read more about Control Chain here.

#### Usability Changes

Some small but very handy usability changes were made, following user requests. These include:

• It’s now possible to MIDI learn using pitchbend
• You can change parameter ranges without having to re-learn a MIDI CC
• You can delete the initial/first pedalboard preset (to better organise your “scenes”)
• We’ve also reduced CPU usage with control-output-intensive plugins

#### Web Interface

• Plugins now have an information icon on top of them in the builder that shows their info when clicked (the icons hide when the screen is too small)
• The Duo’s own actuators now have the “MOD:” prefix to differentiate them from those of Control Chain devices
• You can now always close addressing and pedalboard preset dialogues with the “ESC” key, independent of focus

There are also quite a few more changes and tweaks. Visit our changelog on the wiki to see all changes since v1.3.2.

That’s it! The next upgrade is already being tested, with lots of cool new features on the horizon…

Remember: many of these tweaks and new features were added because of your comments on our forum. So, keep making sweet music with your MOD Duos and let us know of any issues or improvements you’d desire!

## July 10, 2017

### GStreamer News

#### GStreamer Conference 2017 - Call for Papers

This is a formal call for papers (talks) for the GStreamer Conference 2017, which will take place on 21-22 October 2017 in Prague (Czech Republic), just before the Embedded Linux Conference Europe (ELCE).
The GStreamer Conference is a conference for developers, community members, decision-makers, industry partners, and anyone else interested in the GStreamer multimedia framework and open source multimedia.

The call for papers is now open and talk proposals can be submitted. You can find more details about the conference on the GStreamer Conference 2017 web page.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan on having another session with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk.

The deadline for talk submissions is Sunday 13 August 2017, 23:59 UTC.

We hope to see you in Prague!

## July 05, 2017

### blog4

#### new Notstandskomitee music video

First official video for the new album The Golden Times by Notstandskomitee, made for the track Exhaust. Listen to the album at https://notstandskomitee.bandcamp.com

## July 04, 2017

### fundamental code

#### Linux & Multi-Screen Touch Screen Setups

While working on the Zyn-Fusion UI I ended up getting a touch screen to help with the testing process. After getting the screen, buying several incorrect HDMI cables, and setting it up, I found that the touch events weren’t working as expected. In fact they were often showing up on the wrong screen. If I disabled my primary monitor and only used the touch screen, then events were spot on, so this was only a multi-monitor setup issue.

So, what caused the problem and how can it be fixed? Well, by default the mouse/touch events emitted by the new screen were scaled to the total available area, treating multiple screens as a single larger screen. Fortunately X11 provides one solution through xinput. Just running the xinput tool lists the collection of devices which provide mouse and keyboard events to X11.
mark@cvar:~$ xinput
| Virtual core pointer                          id=2    [master pointer  (3)]
|   > Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
|   > PixArt USB Optical Mouse                  id=8    [slave  pointer  (2)]
|   > ILITEK Multi-Touch-V3004                  id=11   [slave  pointer  (2)]
| Virtual core keyboard                         id=3    [master keyboard (2)]
|   > Virtual core XTEST keyboard             id=5    [slave  keyboard (3)]
|   > Power Button                            id=6    [slave  keyboard (3)]
|   > Power Button                            id=7    [slave  keyboard (3)]
|   > AT Translated Set 2 keyboard            id=9    [slave  keyboard (3)]
|   > Speakup                                 id=10   [slave  keyboard (3)]

In this case the monitor is device 11, which has its own set of properties.

mark@cvar:~$ xinput list-props 11
Device 'ILITEK Multi-Touch-V3004':
    Device Enabled (152): 1
    Coordinate Transformation Matrix (154): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
    Device Accel Profile (282): 0
    Device Accel Constant Deceleration (283): 1.000000
    Device Accel Adaptive Deceleration (284): 1.000000
    Device Accel Velocity Scaling (285): 10.000000
    Device Product ID (272): 8746, 136
    Device Node (273): "/dev/input/event13"
    Evdev Axis Inversion (286): 0, 0
    Evdev Axis Calibration (287): <no items>
    Evdev Axes Swap (288): 0
    Axis Labels (289): "Abs MT Position X" (689), "Abs MT Position Y" (690), "None" (0), "None" (0)
    Button Labels (290): "Button Unknown" (275), "Button Unknown" (275), "Button Unknown" (275), "Button Wheel Up" (158), "Button Wheel Down" (159)
    Evdev Scrolling Distance (291): 0, 0, 0
    Evdev Middle Button Emulation (292): 0
    Evdev Middle Button Timeout (293): 50
    Evdev Third Button Emulation (294): 0
    Evdev Third Button Emulation Timeout (295): 1000
    Evdev Third Button Emulation Button (296): 3
    Evdev Third Button Emulation Threshold (297): 20
    Evdev Wheel Emulation (298): 0
    Evdev Wheel Emulation Axes (299): 0, 0, 4, 5
    Evdev Wheel Emulation Inertia (300): 10
    Evdev Wheel Emulation Timeout (301): 200
    Evdev Wheel Emulation Button (302): 4
    Evdev Drag Lock Buttons (303): 0

Notably, xinput provides a property describing a coordinate transformation which can be used to remap the x and y values of the cursor events. The transformation matrix here is a 3x3 matrix used to transform 2D coordinates and is a fairly common sight in computer graphics. It translates from $$(x,y)$$ to $$(x',y')$$ as defined by:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

The transformation matrix allows for stretching, shearing, translation, flipping, scaling, etc.
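As a quick numeric check of what such a matrix does, here is a small Python sketch (an illustration only, not code from the post) that applies a 3x3 homogeneous transform to a 2D point:

```python
def apply_transform(m, x, y):
    """Apply a 3x3 homogeneous transform to a 2D point (x, y)."""
    xp = m[0][0] * x + m[0][1] * y + m[0][2]
    yp = m[1][0] * x + m[1][1] * y + m[1][2]
    w  = m[2][0] * x + m[2][1] * y + m[2][2]
    return xp / w, yp / w   # w stays 1 for pure scale/translate matrices

# Scale x by 0.6 and shift it right by 0.4; leave y alone.
m = [[0.6, 0.0, 0.4],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

print(apply_transform(m, 0.0, 0.0))  # (0.4, 0.0)
print(apply_transform(m, 1.0, 1.0))  # maps the far corner back to (1, 1)
```

Flipping an axis or swapping x and y for a rotated monitor fits the same matrix form, which is why xinput exposes all nine entries.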
For the sorts of problems introduced by a multi-monitor setup I would only expect people to care about translating ($$t$$) the events and then re-scaling ($$s$$) them to the offset area. Using these two parameters, the transformation matrix equation simplifies to:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & t_x\\ 0 & s_y & t_y\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Or, without the matrix representation:

$$\begin{aligned} x' &= s_x x + t_x\\ y' &= s_y y + t_y \end{aligned}$$

With that background out of the way, let’s see how this applied to my specific monitor setup. As I mentioned earlier, the touch events were scaled to the dimensions of the larger virtual screen. Since the touch screen is larger, the y axis is mapped correctly, while the x axis is mapped to pixels 0..3200 (both screens) instead of pixels 1281..3200 (the touch screen only). Since xinput scales these parameters based upon the total screen size, we can divide by the total x size (3200) to learn that the x axis maps to 0..1 rather than 0.4..1.0. Solving the above equations, we can remap the touch events using $$s_x=0.6$$ and $$t_x=0.4$$. This results in the transformation matrix:

$$\begin{bmatrix} 0.6 & 0 & 0.4\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$$

The last step is to provide the new transformation matrix to xinput:

    xinput set-prop 11 'Coordinate Transformation Matrix' 0.6 0 0.4 0 1 0 0 0 1

Now cursor events map onto the correct screen accurately, and the commands to change the xinput properties can easily be put into a shell script.

## June 30, 2017

### rncbc.org

#### Qtractor 0.8.3 - The Stickiest Tauon is out!

Howdy!

Qtractor 0.8.3 (stickiest tauon) is out!

Changes for this mostly just bug-fix beta release:

• Make sure any just recorded clip filename is not reused while over the same track and session. (CRITICAL)
• LV2 Plug-in worker/schedule interface ring-buffer sizes have been increased to 4KB.
• Fixed track-name auto-incremental numbering suffix when modifying any other track property.
• WSOLA vs. (lib)Rubberband time-stretching options are now individualized on a per audio clip basis.
• Long overdue, some brand new and fundamental icons revamp.
• Fixed a tempo-map node add/update/remove rescaling with regard to clip-lengths and automation/curve undo/redo.
• Fixed a potential Activate automation/curve index clash, or aliasing, for any plug-ins that change upstream their parameter count or index order, on sessions saved with the old plug-in versions and vice-versa.

Description: Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:
http://qtractor.org
http://qtractor.sourceforge.net

Project page:
http://sourceforge.net/projects/qtractor

Downloads:
http://sourceforge.net/projects/qtractor/files

Git repos:
http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):
http://sourceforge.net/p/qtractor/wiki/

License: Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && keep the fun, always.
## June 28, 2017

### blog4

#### TMS concert in Hamburg 1.7.2017

After Saturday’s blast of a noise night at XB Liebig, we are getting ready for the next gig at Primal Uproar in Hamburg, where TMS will perform on Saturday 1.7.2017: https://www.tixforgigs.com/site/Pages/Shop/ShowEvent.aspx?ID=18672

We put a recording of the XB Liebig concert on Mixcloud:

## June 27, 2017

### Pid Eins

#### mkosi — A Tool for Generating OS Images

# Introducing mkosi

After blogging about casync I realized I never blogged about the mkosi tool that combines nicely with it. mkosi has been around for a while already, and it’s time to make it a bit better known. mkosi stands for Make Operating System Image, and is a tool for precisely that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite well known and popular. But mkosi has a number of features that I think make it interesting for a variety of use-cases that other tools don't cover that well.

# What is mkosi?

What are those use-cases, and what precisely sets mkosi apart? mkosi is definitely a tool with a focus on developers' needs for building OS images, for testing and debugging, but also for generating production images with cryptographic protection. A typical use-case would be to add a mkosi.default file to an existing project (for example, one written in C or Python), thus making it easy to generate an OS image for it. mkosi will put together the image with development headers and tools, compile your code in it, run your test suite, then throw away the image again, and build a new one, this time without development headers and tools, and install your build artifacts in it. This final image is then "production-ready", and only contains your built program and the minimal set of packages you configured otherwise.
Such an image could then be deployed with casync (or any other tool of course) to be delivered to your set of servers, or IoT devices, or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on today's technology, not yesteryear's. Specifically this means that we'll generate GPT partition tables, not MBR/DOS ones. When you tell mkosi to generate a bootable image for you, it will make it bootable on EFI, not on legacy BIOS. The GPT images generated follow specifications such as the Discoverable Partitions Specification, so that /etc/fstab can remain unpopulated and tools such as systemd-nspawn can automatically dissect the image and boot from it.

So, let's have a look at the specific images it can generate:

1. Raw GPT disk image, with ext4 as root
2. Raw GPT disk image, with btrfs as root
3. Raw GPT disk image, with a read-only squashfs as root
4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
5. A btrfs subvolume on disk, similar to the plain directory
6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional options are available:

1. A swap partition may be added in
2. The system may be made bootable on EFI systems
3. Separate partitions for /home and /srv may be added in
4. The root, /home and /srv partitions may be optionally encrypted with LUKS
5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot

Note that mkosi is distribution-agnostic. It currently can build images based on the following Linux distributions:

1. Fedora
2. Debian
3. Ubuntu
4. ArchLinux
5. openSUSE

Note though that not all distributions are supported at the same feature level currently. Also, as mkosi is based on dnf --installroot, debootstrap, pacstrap and zypper, and those packages are not packaged universally on all distributions, you might not be able to build images for all those distributions on arbitrary host distributions.

The GPT images are put together in a way that they aren't just compatible with UEFI systems, but also with VM and container managers (that is, at least the smart ones, i.e. VM managers that know UEFI, and container managers that grok GPT disk images) to a large degree. In fact, the idea is that you can use mkosi to build a single GPT image that may be used to:

1. Boot on bare-metal boxes
2. Boot in a VM
3. Boot in a systemd-nspawn container
4. Directly run a systemd service off it, using systemd's RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used if available to ensure the image is not tampered with (yes, you read that right, systemd-nspawn and systemd's RootImage= setting automatically do dm-verity these days if the image has it.)

# Mode of Operation

The simplest usage of mkosi is by simply invoking it without parameters (as root):

    # mkosi

Without any configuration this will create a GPT disk image for you, call it image.raw and drop it in the current directory. The distribution used will be the same one as your host runs.

Of course in most cases you want more control over how the image is put together, i.e. select package sets, select the distribution, size partitions and so on. Most of that you can actually specify on the command line, but it is recommended to instead create a couple of mkosi.SOMETHING files and directories in some directory. Then, simply change to that directory and run mkosi without any further arguments.
The tool will then look in the current working directory for these files and directories and make use of them (similar to how make looks for a Makefile…). Every single file/directory is optional, but if they exist they are honored. Here's a list of the files/directories mkosi currently looks for:

1. mkosi.default — This is the main configuration file, here you can configure what kind of image you want, which distribution, which packages and so on.

2. mkosi.extra/ — If this directory exists, then mkosi will copy everything inside it into the images built. You can place arbitrary directory hierarchies in here, and they'll be copied over whatever is already in the image, after it was put together by the distribution's package manager. This is the best way to drop additional static files into the image, or override distribution-supplied ones.

3. mkosi.build — This executable file is supposed to be a build script. When it exists, mkosi will build two images, one after the other in the mode already mentioned above: the first version is the build image, and may include various build-time dependencies such as a compiler or development headers. The build script is also copied into it, and then run inside it. The script should then build whatever shall be built and place the result in $DESTDIR (don't worry, popular build tools such as Automake or Meson all honor $DESTDIR anyway, so there's not much to do here explicitly). It may also run a test suite, or anything else you like. After the script has finished, the build image is removed again, and a second image (the final image) is built. This time, no development packages are included, and the build script is not copied into the image again — however, the build artifacts from the first run (i.e. those placed in $DESTDIR) are copied into the image.

4. mkosi.postinst — If this executable script exists, it is invoked inside the image (inside a systemd-nspawn invocation) and can adjust the image as it likes at a very late point in the image preparation. If mkosi.build exists, i.e. the dual-phased development build process is used, then this script will be invoked twice: once inside the build image and once inside the final image. The first parameter passed to the script clarifies which phase it is run in.

5. mkosi.nspawn — If this file exists, it should contain a container configuration file for systemd-nspawn (see systemd.nspawn(5) for details), which shall be shipped along with the final image and shall be included in the check-sum calculations (see below).

6. mkosi.cache/ — If this directory exists, it is used as package cache directory for the builds. This directory is effectively bind mounted into the image at build time, in order to speed up building images. The package installers of the various distributions will place their package files here, so that subsequent runs can reuse them.

7. mkosi.passphrase — If this file exists, it should contain a pass-phrase to use for the LUKS encryption (if that's enabled for the image built). This file should not be readable to other users.

8. mkosi.secure-boot.crt and mkosi.secure-boot.key — These should be an X.509 key pair to use for signing the kernel and initrd for UEFI SecureBoot, if that's enabled.

# How to use it

So, let's come back to our most trivial example, without any of the mkosi.$SOMETHING files around:

# mkosi


As mentioned, this will create a disk image file image.raw in the current directory. How do we use it? Of course, we could dd it onto some USB stick and boot it on a bare-metal device. However, it's much simpler to first run it in a container for testing:

# systemd-nspawn -bi image.raw


And there you go: the image should boot up, and just work for you.

Now, let's make things more interesting. Let's still not use any of the mkosi.$SOMETHING files around:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw

This is similar to the above, but we made three changes: it's no longer GPT + ext4, but GPT + btrfs. Moreover, the system is made bootable on UEFI systems, and finally, the output is now called foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw

This will look very similar to the systemd-nspawn invocation, except that this uses full VM virtualization rather than container virtualization. (Note that the way to run a UEFI qemu/kvm instance appears to change all the time and is different on the various distributions. It's quite annoying, and I can't really tell you what the right qemu command line is to make this work on your system.)

Of course, it's not all raw GPT disk images with mkosi. Let's try a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux

Of course, if you generate the image as a plain directory you can't boot it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs

In this mode we explicitly pick Fedora as the distribution to use, ask mkosi to generate a GPT image with a read-only squashfs as root, compress the result with xz, and generate a SHA256SUMS file with the hashes of the generated artifacts. The image will contain the SSH client as well as everybody's favorite editor.

Now, let's make use of the various mkosi.$SOMETHING files. Let's say we are working on some Automake-based project and want to make it easy to generate a disk image off the development tree with the version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF


And let's add a build script:

# cat > mkosi.build <<EOF
#!/bin/sh
./autogen.sh
./configure --prefix=/usr
make -j$(nproc)
make install
EOF
# chmod +x mkosi.build


And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi


Let's try it out:

# systemd-nspawn -bi image.raw


Of course, if you do this you'll notice that building an image like this can be quite slow. And slow build times are actively hurtful to your productivity as a developer. Hence let's make things a bit faster. First, let's make use of a package cache shared between runs:

# mkdir mkosi.cache


Building images now should already be substantially faster (and generate less network traffic) as the packages will now be downloaded only once and reused. However, you'll notice that unpacking all those packages and the rest of the work is still quite slow. But mkosi can help you with that. Simply use mkosi's incremental build feature. In this mode mkosi will make a copy of the build and final images immediately before dropping in your build sources or artifacts, so that building an image becomes a lot quicker: instead of always starting totally from scratch a build will now reuse everything it can reuse from a previous run, and immediately begin with building your sources rather than the build image to build your sources in. To enable the incremental build feature use -i:

# mkosi -i


Note that if you use this option, the package list is not updated anymore from your distribution's servers, as the cached copy is made after all packages are installed, and hence until you actually delete the cached copy the distribution's network servers aren't contacted again and no RPMs or DEBs are downloaded. This means the distribution you use becomes "frozen in time" this way. (Which might be a bad thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you'll notice that it won't overwrite the generated image when it already exists. You can either delete the file yourself first (rm image.raw) or let mkosi do it for you right before building a new image, with mkosi -f. You can also tell mkosi to not only remove any such pre-existing images, but also remove any cached copies of the incremental feature, by using -f twice.

I wrote mkosi originally in order to test systemd, and quickly generate a disk image of various distributions with the most current systemd version from git, without all that affecting my host system. I regularly use mkosi for that today, in incremental mode. The two commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw


And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw


The latter I use only if I want to regenerate everything based on the very newest set of RPMs provided by Fedora, instead of a cached snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git tree: mkosi.default and mkosi.build. This way, any developer who wants to quickly test something with current systemd git, or wants to prepare a patch based on it and test it can check out the systemd repository and simply run mkosi in it and a few minutes later he has a bootable image he can test in systemd-nspawn or KVM. casync has similar files: mkosi.default, mkosi.build.

# Random Interesting Features

1. As mentioned already, mkosi will generate dm-verity enabled disk images if you ask for it. For that use the --verity switch on the command line or Verity= setting in mkosi.default. Of course, dm-verity implies that the root volume is read-only. In this mode the top-level dm-verity hash will be placed along-side the output disk image in a file named the same way, but with the .roothash suffix. If the image is to be created bootable, the root hash is also included on the kernel command line in the roothash= parameter, which current systemd versions can use to both find and activate the root partition in a dm-verity protected way. BTW: it's a good idea to combine this dm-verity mode with the raw_squashfs image mode, to generate a genuinely protected, compressed image suitable for running in your IoT device.

2. As indicated above, mkosi can automatically create a check-sum file SHA256SUMS for you (--checksum) covering all the files it outputs (which could be the image file itself, a matching .nspawn file using the mkosi.nspawn file mentioned above, as well as the .roothash file for the dm-verity root hash.) It can then optionally sign this with gpg (--sign). Note that systemd's machinectl pull-tar and machinectl pull-raw commands can download these files and the SHA256SUMS file automatically and verify things on download. In other words: what mkosi outputs is perfectly ready for download using these two systemd commands.
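As a sketch of what that verification amounts to under the hood (using a stand-in payload, not actual mkosi output):

```shell
#!/bin/sh
set -e
# Stand-in for a downloaded artifact; a real image would come from mkosi.
echo "demo payload" > image.raw
# What --checksum produces: a SHA256SUMS file covering the output artifacts.
sha256sum image.raw > SHA256SUMS
# What a downloader (or machinectl) effectively re-checks after download;
# exits non-zero if the file was tampered with in transit.
sha256sum -c SHA256SUMS
```

With --sign there is additionally a detached gpg signature over SHA256SUMS, so the hash list itself can be authenticated before it is trusted.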

3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To make use of that, place your X.509 key pair in two files mkosi.secureboot.crt and mkosi.secureboot.key, and set SecureBoot= or --secure-boot. If so, mkosi will sign the kernel/initrd/kernel command line combination during the build. Of course, if you use this mode, you should also use Verity=/--verity=, otherwise the setup makes only partial sense. Note that mkosi will not help you with actually enrolling the keys you use in your UEFI BIOS.

4. mkosi has minimal support for git checkouts: when it recognizes it is run in a git checkout and you use the mkosi.build script stuff, the source tree will be copied into the build image, but with all files excluded by .gitignore removed.

5. There's support for encryption in place. Use --encrypt= or Encrypt=. Note that the UEFI ESP is never encrypted though, and the root partition only if explicitly requested. The /home and /srv partitions are unconditionally encrypted if that's enabled.

6. Images may be built with all documentation removed.

7. The password for the root user and additional kernel command line arguments may be configured for the image to generate.

# Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies, listed in the README. Most notably you need a somewhat recent systemd version to make use of its full feature set: systemd 233. Older versions are already packaged for various distributions, but much of what I describe above is only available in the most recent release mkosi 3.

The UEFI SecureBoot support requires sbsign which currently isn't available in Fedora, but there's a COPR.

# Future

It is my intention to continue turning mkosi into a tool suitable for:

1. Testing and debugging projects
2. Building images for secure devices
3. Building portable service images
4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and systemd/sd-boot native support for A/B IoT-style partition setups. The idea is that the combination of systemd, casync and mkosi provides generic building blocks for building secure, auto-updating devices, even though all pieces may be used individually, too.

# FAQ

1. Why are you reinventing the wheel again? This is exactly like $SOMEOTHERPROJECT! — Well, to my knowledge there's no tool that integrates this nicely with your project's development tree, and can do dm-verity and UEFI SecureBoot and all that stuff for you. So nope, I don't think this is exactly like $SOMEOTHERPROJECT, thank you very much.

2. What about creating MBR/DOS partition images? — That's really out of focus for me. This is an exercise in figuring out how generic OSes and devices should be built in the future, and an attempt to commoditize OS image building. And no, the future doesn't speak MBR, sorry. That said, I'd be quite interested in adding support for booting on the Raspberry Pi, possibly using a hybrid approach, i.e. using a GPT disk label, but arranging things in a way that the Raspberry Pi boot protocol (which is built around DOS partition tables) can still work.

3. Is this portable? — Well, depends what you mean by portable. No, this tool runs on Linux only, and as it uses systemd-nspawn during the build process it doesn't run on non-systemd systems either. But then again, you should be able to create images for any architecture you like with it; of course, if you want the image bootable on bare-metal, only UEFI systems are supported (but systemd-nspawn should still work fine on them).

4. Where can I get this stuff? — Try GitHub. And some distributions carry packaged versions, but I think none of them carry the current v3 yet.

5. Is this a systemd project? — Yes, it's hosted under the systemd GitHub umbrella. And yes, during run-time systemd-nspawn in a current version is required. But no, the code-bases are otherwise separate, not least because systemd is a C project, and mkosi a Python one.

6. Requiring systemd 233 is a pretty steep requirement, no? — Yes, but the feature we need kind of matters (systemd-nspawn's --overlay= switch), and again, this isn't supposed to be a tool for legacy systems.

7. Can I run the resulting images in LXC or Docker? — Humm, I am not an LXC nor Docker guy. If you select directory or subvolume as image type, LXC should be able to boot the generated images just fine, but I didn't try. Last time I looked, Docker doesn't permit running proper init systems as PID 1 inside the container, as they define their own run-time without intention to emulate a proper system. Hence, no I don't think it will work, at least not with an unpatched Docker version. That said, again, don't ask me questions about Docker, it's not precisely my area of expertise, and quite frankly I am not a fan. To my knowledge neither LXC nor Docker are able to run containers directly off GPT disk images, hence the various raw_xyz image types are definitely not compatible with either. That means if you want to generate a single raw disk image that can be booted unmodified both in a container and on bare-metal, then systemd-nspawn is the container manager to go for (specifically, its -i/--image= switch).
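Relatedly, the "run a systemd service directly off the image" mode mentioned earlier boils down to a unit file along these lines (an illustrative sketch; the service name, image path and binary are made up):

```ini
# myapp.service (hypothetical names and paths)
[Unit]
Description=Service running directly off a mkosi-built GPT image

[Service]
# RootImage= mounts the GPT image and runs the service inside it,
# honoring dm-verity data if the image carries it (systemd 233+).
RootImage=/var/lib/machines/foobar.raw
ExecStart=/usr/bin/myapp
```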

# Should you care? Is this a tool for you?

Well, that's up to you really.

If you hack on some complex project and need a quick way to compile and run your project on a specific current Linux distribution, then mkosi is an excellent way to do that. Simply drop the mkosi.default and mkosi.build files in your git tree and everything will be easy. (And of course, as indicated above: if the project you are hacking on happens to be called systemd or casync be aware that those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great choice too, as it will make it reasonably easy to generate secure images that are protected against offline modification, by using dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a VM or systemd-nspawn container, or a portable service then mkosi is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd init systems, old VM managers, Docker, … then no, mkosi is not for you, but there are plenty of well-established alternatives around that cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to accept your patches and other contributions.

Oh, and one unrelated last thing: don't forget to submit your talk proposal and/or buy a ticket for All Systems Go! 2017 in Berlin — the conference where things like systemd, casync and mkosi are discussed, along with a variety of other Linux userspace projects used for building systems.

### Audio – Stefan Westerfeld's blog

#### 27.06.2016 beast-0.11.0 released

Beast is a music composition and modular synthesis application. beast-0.11.0 is now available at beast.testbit.eu. Support for SoundFont (.sf2) files has been added. On multicore CPUs, Beast now uses all cores for synthesis, which improves performance. Debian packages have also been added, so installation should be very easy on Debian-like systems. And as always, lots of other improvements and bug fixes went into Beast.

Update: I made a screencast of Beast which shows the basics.

### autostatic.com

#### RPi 3 and the real time kernel

As a beta tester for MOD I thought it would be cool to play around with netJACK which is supported on the MOD Duo. The MOD Duo can run as a JACK master and you can connect any JACK slave to it as long as it runs a recent version of JACK2. This opens a plethora of possibilities of course. I’m thinking about building a kind of sidecar device to offload some stuff to using netJACK, think of synths like ZynAddSubFX or other CPU greedy plugins like fat1.lv2. But more on that in a later blog post.

So first I need to set up a sidecar device and I sacrificed one of my RPi's for that, an RPi 3. Flashed an SD card with Raspbian Jessie Lite and started to do some research on the status of real time kernels and the Raspberry Pi, because I'd like to use a real time kernel to get sub-5ms system latency. I compiled real time kernels for the RPi before but you had to jump through some hoops to get those running, so I hoped things would have improved somewhat. Well, that's not the case: the first real time kernel I compiled froze the RPi as soon as I tried to run apt-get install rt-tests. After applying a patch to fix how the RPi folks implemented the FIQ system, the kernel ran without issues:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

And the RPi seems to run stable with acceptable latencies:

Histogram of the latency on the RPi with a real time kernel during 300000 cyclictest loops

So that’s a maximum latency of 75 µs, not bad. I also spotted some higher values around 100 but that’s still okay for this project. The histogram was created with mklatencyplot.bash. I used a different invocation of cyclictest though:

cyclictest -Sm -p 80 -n -i 500 -l 300000

And I ran hackbench in the background to create some load on the RPi:

(while true; do hackbench > /dev/null; done) &

Compiling a real time kernel for the RPi is still not a trivial thing to do and it doesn't help that the few howtos on the interwebs are mostly copy-paste work, incomplete, and contain routines that are unclear or even unnecessary. One thing that struck me too is that the howtos about building kernels for RPi's running Raspbian don't mention the make deb-pkg routine to build a real time kernel. This will create deb packages that are just so much easier to transfer and install than rsync'ing the kernel image and modules. Let's break down how I built a real time kernel for the RPi 3.

First you’ll need to git clone the Raspberry Pi kernel repository:

git clone -b 'rpi-4.9.y' --depth 1 https://github.com/raspberrypi/linux.git

This will only clone the rpi-4.9.y branch into a directory called linux without any history so you’re not pulling in hundreds of megs of data. You will also need to clone the tools repository which contains the compiler we need to build a kernel for the Raspberry Pi:

git clone https://github.com/raspberrypi/tools.git

This will end up in the tools directory. Next step is setting some environment variables so subsequent make commands pick those up:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/path/to/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
export CONCURRENCY_LEVEL=$(nproc)

The KERNEL variable is needed to create the initial kernel config. The ARCH variable indicates which architecture should be used. The CROSS_COMPILE variable indicates where the compiler can be found. The CONCURRENCY_LEVEL variable is set to the number of cores to speed up certain make routines like cleaning up or installing the modules (not the number of jobs, that is done with the -j option of make).

Now that the environment variables are set we can create the initial kernel config:

cd linux
make bcm2709_defconfig

This will create a .config inside the linux directory that holds the initial kernel configuration. Now download the real time patch set and apply it:

cd ..
wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9.33-rt23.patch.xz
cd linux
xzcat ../patch-4.9.33-rt23.patch.xz | patch -p1

Most howtos now continue with building the kernel, but that would result in a kernel that will freeze your RPi because of the FIQ system implementation, which causes lock-ups of the RPi when using threaded interrupts, as real time kernels do. That part needs to be patched, so download the patch and dry-run it:

cd ..
wget https://www.osadl.org/monitoring/patches/rbs3s/usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch
cd linux
patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1 --dry-run

You will notice one hunk will fail; you will have to add that stanza manually, so note which hunk it is, for which file, and at which line it should be added. Now apply the patch:

patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1

And add the failed hunk manually with your favorite editor.

With the FIQ patch in place we're almost set for compiling the kernel, but before we can move on to that step we need to modify the kernel configuration to enable the real time patch set. I prefer doing that with make menuconfig. Then select Kernel Features - Preemption Model - Fully Preemptible Kernel (RT) and select Exit twice. If you're asked if you want to save your config then confirm. In the Kernel features menu you could also set the timer frequency to 1000 Hz if you wish; apparently this could improve USB throughput on the RPi (unconfirmed, needs reference). For real time audio and MIDI this setting is irrelevant nowadays though, as almost all audio and MIDI applications use the hr-timer module, which has a much higher resolution.

With our configuration saved we can start compiling. Clean up first, then disable some debugging options which could cause some overhead, compile the kernel and finally create ready-to-install deb packages:

make clean
scripts/config --disable DEBUG_INFO
make -j$(nproc) deb-pkg

Sit back, enjoy a cuppa, and when building has finished without errors deb packages should have been created in the directory above the linux one. Copy the deb packages to your RPi and install them with dpkg -i. Open up /boot/config.txt and add the following line to it:

kernel=vmlinuz-4.9.33-rt23-v7+

Now reboot your RPi and it should boot with the realtime kernel. You can check with uname -a:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

Since Raspbian uses almost the same kernel source as the one we just built, it is not necessary to copy any dtb files. Also, running mkknlimg is not necessary anymore; the RPi boot process can handle vmlinuz files just fine.

The basis of the sidecar unit is now done. Next up is tweaking the OS and setting up netJACK.

The post RPi 3 and the real time kernel appeared first on autostatic.com.

## June 22, 2017

### GStreamer News

#### GStreamer 1.12.1 stable release (binaries)

Pre-built binary images of the 1.12.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## June 21, 2017

### rncbc.org

#### Vee One Suite 0.8.3 - A Summer'17 release

Howdy!

The Vee One Suite of old-school software instruments, respectively synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler, are into a hot Summer'17 release!

Still available in dual form:

• a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
• a LV2 instrument plug-in.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they go again!

## synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.3 (summer'17) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

change-log:

• Added StartupWMClass entry to desktop file.
• Long overdue, some brand new and fundamental icons revamp.

website:
http://synthv1.sourceforge.net

http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

## samplv1 - an old-school polyphonic sampler

samplv1 0.8.3 (summer'17) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

change-log:

• Added StartupWMClass entry to desktop file.
• Long overdue, some brand new and fundamental icons revamp.
• A Play (current sample) menu item has been added to the sample display right-click context-menu, for triggering the current sample as an internal MIDI note-on/off event.

website:
http://samplv1.sourceforge.net

http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

## drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.3 (summer'17) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

change-log:

• Added StartupWMClass entry to desktop file.
• Long overdue, some brand new and fundamental icons revamp.
• Left-clicking on each element fake-LED now triggers it as an internal MIDI note-on/off event. A Play (current element) menu item has also been added to the element list and sample display right-click context-menus.

website:
http://drumkv1.sourceforge.net

http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && have fun ;)

## June 20, 2017

### Audio – Stefan Westerfeld's blog

#### 20.06.2017 spectmorph-0.3.3 released

A new version of SpectMorph, my audio morphing software, is now available on www.spectmorph.org. The main improvement is that SpectMorph now supports portamento and vibrato. For VST hosts with MPE support (Bitwig), the pitch of each note can be controlled by the sequencer, so sliding from a C major chord to a D minor chord is possible. There is also a new portamento/mono mode, which should work with any host.

### GStreamer News

#### GStreamer 1.12.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## June 19, 2017

### Pid Eins

#### All Systems Go! 2017 CfP Open

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

• Low-level container executors and infrastructure
• IoT and embedded OS infrastructure
• OS, container, IoT image delivery and updating
• Building Linux devices and applications
• Low-level desktop technologies
• Networking
• System and service management
• Tracing and performance measuring
• IPC and RPC systems
• Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

systemd.conf will not take place this year in lieu of All Systems Go!. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

# Introducing casync

In the past months I have been working on a new project: casync. casync takes inspiration from the popular rsync file synchronization tool as well as the probably even more popular git revision control system. It combines the idea of the rsync algorithm with the idea of git-style content-addressable file systems, and creates a new system for efficiently storing and delivering file system images, optimized for high-frequency update cycles over the Internet. Its current focus is on delivering IoT, container, VM, application, portable service or OS images, but I hope to extend it later in a generic fashion to become useful for backups and home directory synchronization as well (but more about that later).

The basic technological building blocks casync is built from are neither new nor particularly innovative (at least not anymore), however the way casync combines them is different from existing tools, and that's what makes it useful for a variety of use-cases that other tools can't cover that well.

# Why?

I created casync after studying how today's popular tools store and deliver file system images. To briefly name a few: Docker has a layered tarball approach, OSTree serves the individual files directly via HTTP and maintains packed deltas to speed up updates, while other systems operate on the block layer and place raw squashfs images (or other archival file systems, such as ISO9660) for download on HTTP shares (in the better cases combined with zsync data).

None of these approaches appeared fully convincing to me when used in high-frequency update cycle systems. In such systems, it is important to optimize towards a couple of goals:

1. Most importantly, make updates cheap traffic-wise (for this most tools use image deltas of some form)
2. Put boundaries on disk space usage on servers (keeping deltas between all version combinations clients might want to update between would mean maintaining a quadratically growing number of deltas on servers)
3. Put boundaries on disk space usage on clients
4. Be friendly to Content Delivery Networks (CDNs), i.e. serve neither too many small nor too many overly large files, and only require the most basic form of HTTP. Provide the repository administrator with high-level knobs to tune the average file size delivered.
5. Be simple to use for users, repository administrators and developers

I don't think any of the tools mentioned above are really good on more than a small subset of these points.

Specifically: Docker's layered tarball approach dumps the "delta" question into the lap of the image creators: the best way to make your image downloads minimal is to base your work on an existing image clients might already have, inheriting its resources and maintaining full history. Here, revision control (a tool for the developer) is intermingled with update management (a concept for optimizing production delivery). As container histories grow, individual deltas are likely to stay small, but on the other hand a brand-new deployment usually requires downloading the full history onto the deployment system, even though there's no use for it there, which likely means substantially more disk space and larger downloads.

OSTree's serving of individual files is unfriendly to CDNs (as many small files in file trees cause an explosion of HTTP GET requests). To counter that, OSTree supports placing pre-calculated delta images between selected revisions on the delivery servers, which entails a certain amount of revision management that leaks into the clients.

Delivering direct squashfs (or other file system) images is almost beautifully simple, but of course means every update requires a full download of the newest image, which is both bad for disk usage and generated traffic. Enhancing it with zsync makes this a much better option, as it can reduce generated traffic substantially at very little cost of history/meta-data (no explicit deltas between a large number of versions need to be prepared server side). On the other hand server requirements in disk space and functionality (HTTP Range requests) are minus points for the use-case I am interested in.

(Note: all the mentioned systems have great properties, and it's not my intention to badmouth them. The only point I am trying to make is that for the use case I care about — file system image delivery with high-frequency update cycles — each system comes with certain drawbacks.)

# Security & Reproducibility

Besides the issues pointed out above I wasn't happy with the security and reproducibility properties of these systems. In today's world where security breaches involving hacking and breaking into connected systems happen every day, an image delivery system that cannot make strong guarantees regarding data integrity is out of date. Specifically, the tarball format is famously nondeterministic: the very same file tree can result in any number of different valid serializations depending on the tool used, its version and the underlying OS and file system. Some tar implementations attempt to correct that by guaranteeing that each file tree maps to exactly one valid serialization, but such a property is always only specific to the tool used. I strongly believe that any good update system must guarantee on every single link of the chain that there's only one valid representation of the data to deliver, that can easily be verified.

# What casync Is

So much for the background on why I created casync. Now, let's have a look at what casync actually is like, and what it does. Here's the brief technical overview:

Encoding: Let's take a large linear data stream, split it into variable-sized chunks (the size of each being a function of the chunk's contents), and store these chunks in individual, compressed files in some directory, each file named after a strong hash value of its contents, so that the hash value may be used as a key for retrieving the full chunk data. Let's call this directory a "chunk store". At the same time, generate a "chunk index" file that lists these chunk hash values plus their respective chunk sizes in a simple linear array. The chunking algorithm is supposed to create variable, but similarly sized chunks from the data stream, and do so in a way that the same data results in the same chunks even if placed at varying offsets. For more information see this blog story.

Decoding: Let's take the chunk index file, and reassemble the large linear data stream by concatenating the uncompressed chunks retrieved from the chunk store, keyed by the listed chunk hash values.
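The encode/decode pair above can be sketched in a few lines of Python. A minimal sketch only: fixed-size chunking and zlib stand in for casync's content-defined chunking and xz compression, and an in-memory dict stands in for the on-disk chunk store:

```python
import hashlib
import zlib  # stand-in for casync's xz compression

def store_chunks(data, chunk_size=4096):
    """Split a blob into chunks, store them keyed by their SHA256 digest,
    and return the (store, index) pair. The index lists digests in order,
    just like a chunk index file lists hashes plus sizes."""
    store, index = {}, []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, zlib.compress(chunk))  # each chunk stored once
        index.append((digest, len(chunk)))
    return store, index

def assemble(store, index):
    """Rebuild the original blob by concatenating the uncompressed chunks,
    looked up in the store by the digests listed in the index."""
    return b"".join(zlib.decompress(store[digest]) for digest, _ in index)

blob = b"hello world " * 1000
store, index = store_chunks(blob)
assert assemble(store, index) == blob
```

Note how repeated content dedupes automatically: identical chunks hash to the same digest and therefore occupy a single file in the store.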

As an extra twist, we introduce a well-defined, reproducible, random-access serialization format for file trees (think: a more modern tar), to permit efficient, stable storage of complete file trees in the system, simply by serializing them and then passing them into the encoding step explained above.

Finally, let's put all this on the network: for each image you want to deliver, generate a chunk index file and place it on an HTTP server. Do the same with the chunk store, and share it between the various index files you intend to deliver.

Why bother with all of this? Streams with similar contents will result in mostly the same chunk files in the chunk store. This means it is very efficient to store many related versions of a data stream in the same chunk store, thus minimizing disk usage. Moreover, when transferring linear data streams chunks already known on the receiving side can be made use of, thus minimizing network traffic.

Why is this different from rsync or OSTree, or similar tools? Well, one major difference between casync and those tools is that we remove file boundaries before chunking things up. This means that small files are lumped together with their siblings and large files are chopped into pieces, which permits us to recognize similarities in files and directories beyond file boundaries, and makes sure our chunk sizes are pretty evenly distributed, without the file boundaries affecting them.

The "chunking" algorithm is based on the buzhash rolling hash function. SHA256 is used as the strong hash function to generate digests of the chunks. xz is used to compress the individual chunks.
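Here's a toy version of content-defined chunking, with a simple polynomial rolling hash standing in for buzhash (the window size, mask and size limits are made-up illustration values, not casync's defaults):

```python
import hashlib

def chunk_data(data, window=48, mask=0x3FF, min_size=256, max_size=8192):
    """Content-defined chunking with a toy polynomial rolling hash.
    A cut point is declared wherever the hash over the last `window` bytes
    has its low bits all zero, so identical content produces identical cut
    points even when shifted to a different offset."""
    P, MOD = 31, 1 << 32
    Pw = pow(P, window, MOD)          # P**window, for dropping the oldest byte
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * P + b) % MOD         # push the new byte into the window
        if i - start >= window:
            h = (h - data[i - window] * Pw) % MOD  # drop the oldest byte
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0       # reset the window at the boundary
    if start < len(data):
        chunks.append(data[start:])   # final partial chunk
    return chunks

# Deterministic pseudo-random test data
data = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(2048))
chunks = chunk_data(data)
assert b"".join(chunks) == data      # the split is lossless
```

Because cut points depend only on the content inside the window, inserting a byte near the front of the stream changes only nearby chunks; later cut points re-synchronize, which is exactly why shifted data still shares most chunks.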

Here's a diagram, hopefully explaining a bit how the encoding process works, despite my crappy drawing skills:

The diagram shows the encoding process from top to bottom. It starts with a block device or a file tree, which is then serialized and chunked up into variable sized blocks. The compressed chunks are then placed in the chunk store, while a chunk index file is written listing the chunk hashes in order. (The original SVG of this graphic may be found here.)

# Details

Note that casync operates on two different layers, depending on the use-case of the user:

1. You may use it on the block layer. In this case the raw block data on disk is taken as-is, read directly from the block device, split into chunks as described above, compressed, stored and delivered.

2. You may use it on the file system layer. In this case, the file tree serialization format mentioned above comes into play: the file tree is serialized depth-first (much like tar would do it) and then split into chunks, compressed, stored and delivered.

The fact that it may be used on both the block and file system layer opens it up for a variety of different use-cases. In the VM and IoT ecosystems shipping images as block-level serializations is more common, while in the container and application world file-system-level serializations are more typically used.

Chunk index files referring to block-layer serializations carry the .caibx suffix, while chunk index files referring to file system serializations carry the .caidx suffix. Note that you may also use casync as a direct tar replacement, i.e. without the chunking, just generating the plain linear file tree serialization. Such files carry the .catar suffix. Internally, .caibx and .caidx files are identical, the only difference is semantic: .caidx files describe a .catar file, while .caibx files may describe any other blob. Finally, chunk stores are directories carrying the .castr suffix.

# Features

Here are a couple of other features casync has:

1. When downloading a new image you may use casync's --seed= feature: each block device, file, or directory specified is processed using the same chunking logic described above, and is used as preferred source when putting together the downloaded image locally, avoiding network transfer of it. This of course is useful whenever updating an image: simply specify one or more old versions as seed and only download the chunks that truly changed since then. Note that using seeds requires no history relationship between seed and the new image to download. This has major benefits: you can even use it to speed up downloads of relatively foreign and unrelated data. For example, when downloading a container image built using Ubuntu you can use your Fedora host OS tree in /usr as seed, and casync will automatically use whatever it can from that tree, for example timezone and locale data that tends to be identical between distributions. Example: casync extract http://example.com/myimage.caibx --seed=/dev/sda1 /dev/sda2. This will place the block-layer image described by the indicated URL in the /dev/sda2 partition, using the existing /dev/sda1 data as seeding source. An invocation like this could be typically used by IoT systems with an A/B partition setup. Example 2: casync extract http://example.com/mycontainer-v3.caidx --seed=/srv/container-v1 --seed=/srv/container-v2 /srv/container-v3, is very similar but operates on the file system layer, and uses two old container versions to seed the new version.

2. When operating on the file system level, the user has fine-grained control on the meta-data included in the serialization. This is relevant since different use-cases tend to require a different set of saved/restored meta-data. For example, when shipping OS images, file access bits/ACLs and ownership matter, while file modification times hurt. When doing personal backups OTOH file ownership matters little but file modification times are important. Moreover different backing file systems support different feature sets, and storing more information than necessary might make it impossible to validate a tree against an image if the meta-data cannot be replayed in full. Due to this, casync provides a set of --with= and --without= parameters that allow fine-grained control of the data stored in the file tree serialization, including the granularity of modification times and more. The precise set of selected meta-data features is also always part of the serialization, so that seeding can work correctly and automatically.

3. casync tries to be as accurate as possible when storing file system meta-data. This means that besides the usual baseline of file meta-data (file ownership and access bits), and more advanced features (extended attributes, ACLs, file capabilities), a number of more exotic attributes are stored as well, including Linux chattr(1) file attributes, as well as FAT file attributes (you may wonder why the latter? — EFI is FAT, and /efi is part of the comprehensive serialization of any host). In the future I intend to extend this further, for example storing btrfs sub-volume information where available. Note that as described above every single type of meta-data may be turned off and on individually, hence if you don't need FAT file bits (and I figure it's pretty likely you don't), then they won't be stored.

4. The user creating .caidx or .caibx files may control the desired average chunk length (before compression) freely, using the --chunk-size= parameter. Smaller chunks increase the number of generated files in the chunk store and increase HTTP GET load on the server, but also ensure that sharing between similar images is improved, as identical patterns in the images stored are more likely to be recognized. By default casync will use a 64K average chunk size. Tweaking this can be particularly useful when adapting the system to specific CDNs, or when delivering compressed disk images such as squashfs (see below).

5. Emphasis is placed on making all invocations reproducible, well-defined and strictly deterministic. As mentioned above this is a requirement to reach the intended security guarantees, but is also useful for many other use-cases. For example, the casync digest command may be used to calculate a hash value identifying a specific directory in all desired detail (use --with= and --without= to pick the desired detail). Moreover the casync mtree command may be used to generate a BSD mtree(5) compatible manifest of a directory tree, .caidx or .catar file.

6. The file system serialization format is nicely composable. By this I mean that the serialization of a file tree is the concatenation of the serializations of all files and file sub-trees located at the top of the tree, with zero meta-data references from any of these serializations into the others. This property is essential to ensure maximum reuse of chunks when similar trees are serialized.

7. When extracting file trees or disk image files, casync will automatically create reflinks from any specified seeds if the underlying file system supports it (such as btrfs, ocfs, and future xfs). After all, instead of copying the desired data from the seed, we can just tell the file system to link up the relevant blocks. This works both when extracting .caidx and .caibx files — the latter of course only when the extracted disk image is placed in a regular raw image file on disk, rather than directly on a plain block device, as plain block devices do not know the concept of reflinks.

8. Optionally, when extracting file trees, casync can create traditional UNIX hard-links for identical files in specified seeds (--hardlink=yes). This works on all UNIX file systems, and can save substantial amounts of disk space. However, this only works for very specific use-cases where disk images are considered read-only after extraction, as any changes made to one tree will propagate to all other trees sharing the same hard-linked files, as that's the nature of hard-links. In this mode, casync exposes OSTree-like behavior, which is built heavily around read-only hard-link trees.

9. casync tries to be smart when choosing what to include in file system images. Implicitly, file systems such as procfs and sysfs are excluded from serialization, as they expose API objects, not real files. Moreover, the "nodump" (+d) chattr(1) flag is honored by default, permitting users to mark files to exclude from serialization.

10. When creating and extracting file trees casync may apply an automatic or explicit UID/GID shift. This is particularly useful when transferring container images for use with Linux user namespacing.

11. In addition to local operation, casync currently supports HTTP, HTTPS, FTP and ssh natively for downloading chunk index files and chunks (the ssh mode requires casync to be installed on the remote host, but an sftp mode not requiring that should be easy to add). When creating index files or chunks, only ssh is supported as remote back-end.

12. When operating on block-layer images, you may expose locally or remotely stored images as local block devices. Example: casync mkdev http://example.com/myimage.caibx exposes the disk image described by the indicated URL as local block device in /dev, which you then may use the usual block device tools on, such as mount or fdisk (only read-only though). Chunks are downloaded on access with high priority, and at low priority when idle in the background. Note that in this mode, casync also plays a role similar to "dm-verity", as all blocks are validated against the strong digests in the chunk index file before passing them on to the kernel's block layer. This feature is implemented through Linux' NBD kernel facility.

13. Similarly, when operating on file-system-layer images, you may mount locally or remotely stored images as regular file systems. Example: casync mount http://example.com/mytree.caidx /srv/mytree mounts the file tree image described by the indicated URL as a local directory /srv/mytree. This feature is implemented through Linux' FUSE kernel facility. Note that special care is taken that the images exposed this way can be packed up again with casync make and are guaranteed to return the bit-by-bit exact same serialization again that it was mounted from. No data is lost or changed while passing things through FUSE (OK, strictly speaking this is a lie, we do lose ACLs, but that's hopefully just a temporary gap to be fixed soon).

14. In IoT A/B fixed size partition setups the file systems placed in the two partitions are usually much smaller than the partition size, in order to keep some room for later, larger updates. casync is able to analyze the super-block of a number of common file systems in order to determine the actual size of a file system stored on a block device, so that writing a file system to such a partition and reading it back again will result in reproducible data. Moreover this speeds up the seeding process, as there's little point in seeding the empty space after the file system within the partition.
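The composability property from point 6 above can be demonstrated with a toy serializer. This is a sketch only: dicts model directories, bytes model file contents, and the record format is invented for illustration, not casync's real serialization:

```python
def serialize(tree, prefix=""):
    """Depth-first serialization of a toy file tree. Each entry is
    self-contained, with no references into sibling entries, which is
    exactly what makes the format composable."""
    out = b""
    for name in sorted(tree):                 # sorted for reproducibility
        node, path = tree[name], prefix + "/" + name
        if isinstance(node, dict):            # directory: recurse depth-first
            out += b"D %s\n" % (path.encode(),)
            out += serialize(node, path)
        else:                                 # file: path, size, payload
            out += b"F %s %d\n" % (path.encode(), len(node))
            out += node
    return out

tree = {"etc": {"os-release": b"ID=demo\n"},
        "usr": {"bin": {"true": b"\x7fELF"}}}
whole = serialize(tree)
parts = b"".join(serialize({name: tree[name]}) for name in sorted(tree))
assert whole == parts   # whole tree == concatenation of its top-level entries
```

Because the whole-tree serialization is byte-for-byte the concatenation of the per-entry serializations, two trees that share a sub-tree share the corresponding byte run, and hence most of its chunks.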

# Example Command Lines

Here's how to use casync, explained with a few examples:

```
$ casync make foobar.caidx /some/directory
```

This will create a chunk index file foobar.caidx in the local directory, and populate the chunk store directory default.castr located next to it with the chunks of the serialization (you can change the name for the store directory with --store= if you like). This command operates on the file-system level. A similar command operating on the block level:

```
$ casync make foobar.caibx /dev/sda1
```


This command creates a chunk index file foobar.caibx in the local directory describing the current contents of the /dev/sda1 block device, and populates default.castr in the same way as above. Note that you may as well read a raw disk image from a file instead of a block device:

```
$ casync make foobar.caibx myimage.raw
```

To reconstruct the original file tree from the .caidx file and the chunk store of the first command, use:

```
$ casync extract foobar.caidx /some/other/directory
```


And similar for the block-layer version:

```
$ casync extract foobar.caibx /dev/sdb1
```

or, to extract the block-layer version into a raw disk image:

```
$ casync extract foobar.caibx myotherimage.raw
```


The above are the most basic commands, operating on local data only. Now let's make this more interesting, and reference remote resources:

```
$ casync extract http://example.com/images/foobar.caidx /some/other/directory
```

This extracts the specified .caidx onto a local directory. This of course assumes that foobar.caidx was uploaded to the HTTP server in the first place, along with the chunk store. You can use any command you like to accomplish that, for example scp or rsync. Alternatively, you can let casync do this directly when generating the chunk index:

```
$ casync make ssh.example.com:images/foobar.caidx /some/directory
```


This will use ssh to connect to the ssh.example.com server, and then place the .caidx file and the chunks on it. Note that this mode of operation is "smart": this scheme will only upload chunks currently missing on the server side, and not re-transmit what is already available.

Note that you can always configure the precise path or URL of the chunk store via the --store= option. If you do not do that, then the store path is automatically derived from the path or URL: the last component of the path or URL is replaced by default.castr.
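The derivation rule just described can be sketched directly. The helper below is hypothetical, mirroring the rule stated in the text rather than casync's actual code:

```python
from urllib.parse import urlsplit, urlunsplit
import posixpath

def derive_store(index_location):
    """Derive the default chunk store location from an index path or URL:
    the last path component is replaced by "default.castr"."""
    scheme, netloc, path, query, frag = urlsplit(index_location)
    new_path = posixpath.join(posixpath.dirname(path), "default.castr")
    return urlunsplit((scheme, netloc, new_path, query, frag))

assert derive_store("http://example.com/images/foobar.caidx") == \
       "http://example.com/images/default.castr"
assert derive_store("foobar.caibx") == "default.castr"
```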

Of course, when extracting .caidx or .caibx files from remote sources, using a local seed is advisable:

```
$ casync extract http://example.com/images/foobar.caidx --seed=/some/existing/directory /some/other/directory
```

Or on the block layer:

```
$ casync extract http://example.com/images/foobar.caibx --seed=/dev/sda1 /dev/sdb2
```
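The effect of seeding can be sketched in Python. A minimal sketch assuming fixed-size chunking for brevity (casync itself uses content-defined cuts, and the helper names here are invented for illustration):

```python
import hashlib

CHUNK = 4096  # fixed-size chunking for brevity

def make_index(data):
    """Chunk a blob and return its index as (digest, offset) entries."""
    return [(hashlib.sha256(data[o:o + CHUNK]).hexdigest(), o)
            for o in range(0, len(data), CHUNK)]

def plan_download(new_index, seed):
    """Digests needed for the new image that the local seed cannot provide.
    The seed is chunked with the same algorithm, so any chunk it shares
    with the new image is taken locally instead of being downloaded."""
    local = {digest for digest, _ in make_index(seed)}
    return [digest for digest, _ in new_index if digest not in local]

# A "v1" image already on disk, and a "v2" with one small in-place change
v1 = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(2048))
v2 = bytearray(v1)
v2[1000:1008] = b"patched!"
needed = plan_download(make_index(bytes(v2)), v1)
assert len(needed) == 1   # only the chunk containing the change is fetched
```

With content-defined chunking the same idea also survives insertions and deletions, since cut points re-synchronize after a shifted region.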


When creating chunk indexes on the file system layer casync will by default store meta-data as accurately as possible. Let's create a chunk index with reduced meta-data:

```
$ casync make foobar.caidx --with=sec-time --with=symlinks --with=read-only /some/dir
```

This command will create a chunk index for a file tree serialization that has three features above the absolute baseline supported: 1s granularity time-stamps, symbolic links and a single read-only bit. In this mode, all the other meta-data bits are not stored, including nanosecond time-stamps, full UNIX permission bits, file ownership or even ACLs or extended attributes. Now let's make a .caidx file available locally as a mounted file system, without extracting it:

```
$ casync mount http://example.com/images/foobar.caidx /mnt/foobar
```


And similar, let's make a .caibx file available locally as a block device:

```
$ casync mkdev http://example.com/images/foobar.caibx
```

This will create a block device in /dev and print the used device node path to STDOUT. As mentioned, casync is big about reproducibility. Let's make use of that to calculate a digest identifying a very specific version of a file tree:

```
$ casync digest .
```


This digest will include all meta-data bits casync and the underlying file system know about. Usually, to make this useful you want to configure exactly what meta-data to include:

```
$ casync digest --with=unix .
```

This makes use of the --with=unix shortcut for selecting meta-data fields. Specifying --with=unix selects all meta-data that traditional UNIX file systems support. It is a shortcut for writing out: --with=16bit-uids --with=permissions --with=sec-time --with=symlinks --with=device-nodes --with=fifos --with=sockets. Note that when calculating digests or creating chunk indexes you may also use the negative --without= option to remove specific features but start from the most precise:

```
$ casync digest --without=flag-immutable
```


This generates a digest with the most accurate meta-data, but leaves one feature out: chattr(1)'s immutable (+i) file flag.
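The shortcut expansion spelled out above could be modeled like this (a hypothetical helper for illustration, not casync's actual code):

```python
# Metadata feature shortcuts; "unix" expands to the features traditional
# UNIX file systems support, as listed in the text above.
SHORTCUTS = {
    "unix": ["16bit-uids", "permissions", "sec-time", "symlinks",
             "device-nodes", "fifos", "sockets"],
}

def resolve_with(values):
    """Expand a list of --with= values, resolving shortcuts to the
    individual meta-data features they stand for."""
    features = []
    for v in values:
        features.extend(SHORTCUTS.get(v, [v]))
    return features

assert resolve_with(["unix"]) == SHORTCUTS["unix"]
assert resolve_with(["sec-time", "symlinks"]) == ["sec-time", "symlinks"]
```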

To list the contents of a .caidx file use a command like the following:

```
$ casync list http://example.com/images/foobar.caidx
```

or

```
$ casync mtree http://example.com/images/foobar.caidx
```


The former command will generate a brief list of files and directories, not too different from tar t or ls -al in its output. The latter command will generate a BSD mtree(5) compatible manifest. Note that casync actually stores substantially more file meta-data than mtree files can express, though.

# What casync isn't

1. casync is not an attempt to minimize serialization and downloaded deltas to the extreme. Instead, the tool is supposed to find a good middle ground, that is good on traffic and disk space, but not at the price of convenience or requiring explicit revision control. If you care about updates that are absolutely minimal, there are binary delta systems around that might be an option for you, such as Google's Courgette.

2. casync is not a replacement for rsync, or git or zsync or anything like that. They have very different use-cases and semantics. For example, rsync permits you to directly synchronize two file trees remotely. casync just cannot do that, and it is unlikely it ever will.

# Where next?

casync is supposed to be a generic synchronization tool. Its primary focus for now is delivery of OS images, but I'd like to make it useful for a couple other use-cases, too. Specifically:

1. To make the tool useful for backups, encryption is missing. I have pretty concrete plans for how to add that. When implemented, the tool might become an alternative to restic, BorgBackup or tarsnap.

2. Right now, if you want to deploy casync in real-life, you still need to validate the downloaded .caidx or .caibx file yourself, for example with some gpg signature. It is my intention to integrate with gpg in a minimal way so that signing and verifying chunk index files is done automatically.

3. In the longer run, I'd like to build an automatic synchronizer for $HOME between systems from this. Each $HOME instance would be stored automatically in regular intervals in the cloud using casync, and conflicts would be resolved locally.

4. casync is written in a shared library style, but it is not yet built as one. Specifically this means that almost all of casync's functionality is supposed to be available as C API soon, and applications can process casync files on every level. It is my intention to make this library useful enough so that it will be easy to write a module for GNOME's gvfs subsystem in order to make remote or local .caidx files directly available to applications (as an alternative to casync mount). In fact the idea is to make this all flexible enough that even the remoting back-ends can be replaced easily, for example to replace casync's default HTTP/HTTPS back-ends built on CURL with GNOME's own HTTP implementation, in order to share cookies, certificates, … There's also an alternative method to integrate with casync in place already: simply invoke casync as a sub-process. casync will inform you about a certain set of state changes using a mechanism compatible with sd_notify(3). In future it will also propagate progress data this way and more.

5. I intend to add a new seeding back-end that sources chunks from the local network. After downloading the new .caidx file off the Internet casync would then search for the listed chunks on the local network first before retrieving them from the Internet. This should speed things up on all installations that have multiple similar systems deployed in the same network.

Further plans are listed tersely in the TODO file.

# FAQ:

1. Is this a systemd project? — casync is hosted under the github systemd umbrella, and the projects share the same coding style. However, the code-bases are distinct and without interdependencies, and casync works fine both on systemd systems and systems without it.

2. Is casync portable? — At the moment: no. I only run Linux and that's what I code for. That said, I am open to accepting portability patches (unlike for systemd, which doesn't really make sense on non-Linux systems), as long as they don't interfere too much with the way casync works. Specifically this means that I am not too enthusiastic about merging portability patches for OSes lacking the openat(2) family of APIs.

3. Does casync require reflink-capable file systems to work, such as btrfs? — No it doesn't. The reflink magic in casync is employed when the file system permits it, and it's good to have it, but it's not a requirement, and casync will implicitly fall back to copying when it isn't available. Note that casync supports a number of file system features on a variety of file systems that aren't available everywhere, for example FAT's system/hidden file flags or xfs's projinherit file flag.

4. Is casync stable? — I just tagged the first, initial release. While I have been working on it for quite some time and it is quite featureful, this is the first time I advertise it publicly, and it has hence received very little testing outside of its own test suite. I am also not fully ready to commit to the stability of the current serialization or chunk index format. I don't see any breakages coming for it though. casync is pretty light on documentation right now, and does not even have a man page; I intend to correct that soon.

5. Are the .caidx/.caibx and .catar file formats open and documented? — casync is Open Source, so if you want to know the precise format, have a look at the sources for now. It's definitely my intention to add comprehensive docs for both formats however. Don't forget this is just the initial version right now.

6. casync is just like $SOMEOTHERTOOL! Why are you reinventing the wheel (again)? — Well, because casync isn't "just like" some other tool. I am pretty sure I did my homework, and that there is no tool just like casync right now. The tools coming closest are probably rsync, zsync, tarsnap, restic, but they are quite different beasts each.

7. Why did you invent your own serialization format for file trees? Why don't you just use tar? — That's a good question, and other systems — most prominently tarsnap — do that. However, as mentioned above tar doesn't enforce reproducibility. It also doesn't really do random access: if you want to access some specific file you need to read every single byte stored before it in the tar archive to find it, which is of course very expensive. The serialization casync implements places a focus on reproducibility, random access, and meta-data control. Much like traditional tar it can still be generated and extracted in a stream fashion though.

8. Does casync save/restore SELinux/SMACK file labels? — At the moment, no. That's not because I wouldn't want it to, but simply because I am not a guru of either of these systems, and didn't want to implement something I do not fully grok nor can test. If you look at the sources you'll find that there are already some definitions in place that keep room for them though. I'd be delighted to accept a patch implementing this fully.

9. What about delivering squashfs images? How well does chunking work on compressed serializations? — That's a very good point! Usually, if you apply a chunking algorithm to a compressed data stream (let's say a tar.gz file), then changing a single bit at the front will propagate into the entire remainder of the file, so that minimal changes will explode into major changes. Thankfully this doesn't apply that strictly to squashfs images, as squashfs provides random access to files and directories and thus breaks up the compression streams at regular intervals to make seeking easy. This fact is beneficial for systems employing chunking, such as casync, as this means single bit changes might affect their vicinity but will not explode in an unbounded fashion. In order to achieve best results when delivering squashfs images through casync the block sizes of squashfs and the chunk sizes of casync should be matched up (using casync's --chunk-size= option). How precisely to choose both values is left as a research subject for the user, for now.

10. What does the name casync mean? — It's a synchronizing tool, hence the -sync suffix, following rsync's naming. It makes use of the content-addressable concept of git, hence the ca- prefix.

11. Where can I get this stuff? Is it already packaged? — Check out the sources on GitHub. I just tagged the first version. Martin Pitt has packaged casync for Ubuntu. There is also an ArchLinux package. Zbigniew Jędrzejewski-Szmek has prepared a Fedora RPM that hopefully will soon be included in the distribution.

# Should you care? Is this a tool for you?

Well, that's up to you really. If you are involved with projects that need to deliver IoT, VM, container, application or OS images, then maybe this is a great tool for you — but other options exist, some of which are linked above. Note that casync is an Open Source project: if it doesn't do exactly what you need, prepare a patch that adds what you need, and we'll consider it.

If you are interested in the project and would like to talk about this in person, I'll be presenting casync soon at Kinvolk's Linux Technologies Meetup in Berlin, Germany. You are invited. I also intend to talk about it at All Systems Go!, also in Berlin.
## June 18, 2017

### GStreamer News

#### GStreamer 1.10.5 stable release (binaries)

Pre-built binary images of the 1.10.5 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android. The builds are available for download from: Android, iOS, Mac OS X and Windows.

## June 17, 2017

### KXStudio News

#### DPF-Plugins v1.1 released

With some minor things finally done and all reported bugs squashed, it's time to tag a new release of DPF-Plugins. The initial 1.0 version was not really advertised/publicized before, as there were still a few things I wanted done first - but the plugins were already usable as-is. The base framework used by these plugins (DPF) will get some deep changes soon, so better to have this release out now.

I will not write a changelog here; it was just many small changes here and there for all the plugins since v1.0. Just think of this release as the initial one. :P

The source code plus Linux, macOS and Windows binaries can be downloaded at https://github.com/DISTRHO/DPF-Plugins/releases/tag/v1.1. The plugins are released as LADSPA, DSSI, LV2, VST2 and JACK standalone.

As this is the first time I show off the plugins like this, let's go through them a little bit... The order shown is more or less the order in which they were made. Note that most plugins here were made/ported as a learning exercise, so not everything is new. Many thanks to António Saraiva for the design of some of these interfaces!

### Mini-Series

This is a collection of small but useful plugins, based on the good old LOSER-Dev Plugins. This collection currently includes 3 Band EQ, 3 Band Splitter and Ping Pong Pan.

### MVerb

Studio quality, open-source reverb. Its release was intended to provide a practical demonstration of Dattorro's figure-of-eight reverb structure and provide the open source community with a high quality reverb. This is a DPF'ied build of the original MVerb plugin, allowing a proper Linux version with UI.
### Nekobi

Simple single-oscillator synth based on the Roland TB-303. This is a DPF'ied build of the nekobee project, allowing LV2 and VST builds of the plugin, plus a nicer UI with a simple cat animation. ;)

### Kars

Simple Karplus-Strong plucked string synth. This is a DPF'ied build of the karplong DSSI example synth, written by Chris Cannam. It implements the basic Karplus-Strong plucked-string synthesis algorithm (Kevin Karplus & Alex Strong, "Digital Synthesis of Plucked-String and Drum Timbres", Computer Music Journal 1983).

### ndc-Plugs

DPF'ied ports of some plugins from Niall Moody. See http://www.niallmoody.com/ndcplugs/plugins.htm for the original author's page. This collection currently includes the Amplitude Imposer, Cycle Shifter and Soul Force plugins.

### ProM

projectM is an awesome music visualizer. This plugin makes it work as an audio plugin (LV2 and VST).

### glBars

This is an OpenGL bars visualization plugin (as seen in XMMS and XBMC/Kodi). Adapted from the jack_glbars project by Nedko Arnaudov.

## June 15, 2017

### ardour

#### Ardour 5.10 released

We are pleased to announce the availability of Ardour 5.10. This is primarily a bug-fix release, with several important fixes for recent selection/cut/copy/paste regressions along with fixes for many long-standing issues large and small. This release also sees the arrival of VCA slave automation, along with improvements in overall VCA master/slave behaviour. There are also significant extensions to Ardour's OSC support. Read more below for the full list of features, improvements and fixes.

read more

### GStreamer News

#### GStreamer 1.10.5 stable release

The GStreamer team is pleased to announce the fifth bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework! This release only contains bugfixes and it should be safe to update from 1.10.0. It is most likely the last release in the stable 1.10 release series. See /releases/1.10/ for the full release notes.
Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## June 14, 2017

### blog4

#### new sound installation by Tina Madsen opens at Liebig12, Berlin

Block 4 artist Tina Mariane Krogh Madsen presents her new sound installation at Liebig12 in Berlin this Thursday, 15 June: http://www.liebig12.net/15-06-tina-mariane-krogh-madsenbody-resonance-sound-installation/

## June 10, 2017

### KXStudio News

#### KXStudio 14.04.5 release and future plans

Hello there, it's time for another KXStudio ISO release! KXStudio 14.04.5 is here! A lot has changed in the applications and plugins for Linux Audio (even in KXStudio itself), so it was about time to see those ISO images updated.

From what the user can see, it might appear as if nothing has truly changed. After all, this is an updated image still based on Ubuntu 14.04, like those from 2 years ago. But there have been a great many releases of our beloved software, enough to deserve this small ISO update.

There is no list of changes this time, sorry. The main thing worth mentioning is that the base system is exactly the same, with only applications and plugins updated. You know the saying - if it ain't broke, don't fix it!

Before you ask... no, there won't be a 16.04-based ISO release. When 2016 started, KDE5 was not in good enough shape, and it would have needed a lot of work (and time) to port all the changes made for KDE4 over to KDE5. KDE5 is a lot better now than it used to be, but we missed the opportunity there.
The current plan is to slowly migrate everything we have to KDE5 (meta-packages, scripts, tweaks, artwork, etc.) and do a new ISO release in May 2018. (Yes, this means using Ubuntu 18.04 as base.) The choice of KDE Plasma as desktop environment is not set in stone; other (lighter) desktops have appeared recently that will be considered. In the end it depends on whether it will be stable and good enough for audio production.

You can download the new ISOs on the KXStudio website, at http://kxstudio.linuxaudio.org/Downloads#LiveDVD.

And that's it for now. We hope you enjoy KXStudio, be it the ISO "distribution" release or the repositories.

## June 09, 2017

### Linux – CDM Create Digital Music

#### Ableton have now made it easy for any developer to work with Push 2

You know Ableton Push 2 will work when it's plugged into a computer and you're running Ableton Live. You get bi-directional feedback on the lit pads and on the screen. But Ableton have also quietly made it possible for any developer to make Push 2 work – without even requiring drivers – on any software, on virtually any platform. And a new library is the final piece in making that easy.

Even if you're not a developer, that's big news – because it means that you'll likely see solutions for using Push 2 with more than just Ableton Live. That not only improves Push as an investment, but ensures that it doesn't collect dust or turn into a paperweight when you're using other software – now or down the road.

And it could also mean you don't always need a computer handy. Push 2 uses standards supported on every operating system, so this could mean operation with an iPad or a Raspberry Pi. That's really what this post-PC thing is all about. The laptop still might be the best bang-for-your-buck equation in the studio, but maybe live you want something in the form of a stompbox, or something that goes on a music stand while you sing or play.

If you are a developer, there are two basic pieces.
First, there's the Push Interface Description. This bit tells you how to take control of the hardware's various interactions.

https://github.com/Ableton/push-interface

Now, it was already possible to write to the display, but it was a bit of work. Out this week is a simple C++ code library you can bootstrap, with example code to get you up and running. It's built in JUCE, the tool of choice for a whole lot of developers, mobile and desktop alike. (Thanks, ROLI!)

https://github.com/Ableton/push2-display-with-juce

Marc Resibois created this example, but credit to Ableton for making this public. Here's an example of what you can do, with Marc demonstrating on the Raspberry Pi:

This kind of openness is still very unusual in the hardware/software industry. (Novation's open source Launchpad Pro firmware API is another example; it takes a different angle, in that you're actually rewriting the interactions on the device. I'll cover that soon.) But I think this is very much needed. Having hardware/software integration is great. Now it's time to take the next step and make that interaction more accessible to users.

Open ecosystems in music are unique in that they tend to encourage, rather than discourage, sales. They increase the value of the gear we buy, and deepen the relationships makers have with users (manufacturers and independent makers alike). And these sorts of APIs also, ironically, force hardware developers to make their own iteration and revision easier.

It's also a great step in a series of steps forward on openness and interoperability from Ableton. Whereas the company started with relatively closed hardware APIs built around proprietary manufacturer relationships, Ableton Link and the Push API and other initiatives are making it easier for Live and Push users to make these tools their own.

The post Ableton have now made it easy for any developer to work with Push 2 appeared first on CDM Create Digital Music.
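To give a flavour of what the published interface description covers, here is a rough sketch of preparing one display frame in Python. The resolution, 16-bit BGR565 pixel format, per-line padding, and header bytes below are assumptions from memory of the spec, not verified values; check github.com/Ableton/push-interface before relying on any of them. The real protocol also XORs the pixel payload with a "signal shaping" pattern before the USB bulk write, which is omitted here.

```python
# Hypothetical sketch of building one Push 2 display frame. All constants
# are assumptions to verify against Ableton's push-interface document.

WIDTH, HEIGHT = 960, 160      # assumed display resolution
LINE_BYTES = 2048             # assumed padded line stride in bytes
FRAME_HEADER = bytes([0xFF, 0xCC, 0xAA, 0x88]) + bytes(12)  # assumed header

def bgr565(r: int, g: int, b: int) -> int:
    """Pack 8-bit RGB into a 16-bit BGR565 word (blue in the high bits)."""
    return ((b >> 3) << 11) | ((g >> 2) << 5) | (r >> 3)

def make_frame(pixel) -> bytes:
    """Serialize pixel(x, y) -> (r, g, b) into one padded frame payload."""
    buf = bytearray(LINE_BYTES * HEIGHT)
    for y in range(HEIGHT):
        row = y * LINE_BYTES
        for x in range(WIDTH):
            v = bgr565(*pixel(x, y))
            buf[row + 2 * x] = v & 0xFF       # little-endian low byte
            buf[row + 2 * x + 1] = v >> 8     # high byte
    return bytes(buf)

# A sender would then write FRAME_HEADER followed by the (XOR-masked)
# payload to the device's USB bulk OUT endpoint, once per refresh.
```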
## June 08, 2017

### Linux – CDM Create Digital Music

#### ROLI now make a $299, ultra-compact expressive keyboard

ROLI are filling out their mobile line of controllers, Blocks, with a two-octave keyboard – and that could change a lot. In addition to the wireless Bluetooth, battery-powered light-up X/Y pad and touch shortcuts, now you get something that looks like an instrument. The Seaboard Block is an ultra-mobile, expressive keyboard for your iOS gadget or computer, and it’s available for $299, including in Apple Stores. If you wanted a new-fangled “expressive” keyboard – a controller on which you can move your fingers into and around the keys for extra expression – ROLI already had one strong candidate. The Seaboard RISE is a beautiful, futuristic, slim device with a familiar key layout and a price of US$799. It’ll feel a bit weird playing a piano sound on it if you’re a keyboardist, since the soft, spongy keys will be new to you. But you’ll know where the notes are, and it’ll be responsive. Then, switch to any more unusual sound – synths, physically modeled instruments, and the like – and it becomes simply magical. Finally, you have a new physical interface for your new, unheard sounds.

For me, the RISE was already a sweet spot. But I’ll be honest, I can still imagine holding back because of the price. And it doesn’t fit in my backpack, or my easyJet-friendly rollaway.

Size and price matter. So the Seaboard Block, if it feels good, could really be the winner. And even if you passed up that X/Y pad and touch controller, you might take a second look at this one. (Plus, it makes those Blocks make way more sense.)

We’ll get one in to test when they ship later this month. But ROLI also promise a touch and feel similar to the RISE (if not quite as deep, since the Block is slimmer). I found the previous Blocks to be responsive, but not as expressive as the RISE – so that’s good news.

What you get is a two-octave keyboard in a small-but-playable minikey form factor, USB-C for charging and MIDI out, and connectors for snap-and-play use with other Blocks.

For those of you not familiar, the Seaboard line also includes what ROLI somewhat confusingly call “5D Touch.” (“Help! I’m trapped in a tesseract and wound up in a wormhole to an evil dimension and now there’s a version of me with an agonizer telling me to pledge allegiance to the Terran Empire!”)

What this means in practical terms is, you can push your fingers into the keys and make something happen, or slide them up and down the surface of the keys and make something happen, or wiggle and bend between notes, or run your finger along a continuous touch strip below the keys and get glissandi. And that turns out to be really, really useful. Also, I can’t overstate this enough – if you have even basic keyboard skills, having a piano-style layout is enormously intuitive. (By the same token, the Linnstrument seems to make sense to people used to frets.)
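Under the hood, this kind of per-finger expression typically travels as MPE (MIDI Polyphonic Expression), which ROLI helped drive: each sounding note is assigned its own MIDI channel, so pitch bend, channel pressure, and the CC74 "timbre" control apply to that finger alone rather than to the whole keyboard. A minimal sketch of the raw messages involved (channel numbers are 0-based; the specific values are illustrative):

```python
# Minimal sketch of MPE-style per-note expression as raw MIDI bytes.

def note_on(ch: int, note: int, vel: int) -> bytes:
    return bytes([0x90 | ch, note, vel])

def pitch_bend(ch: int, value: int) -> bytes:
    """value is 14-bit (0..16383); 8192 means no bend. Sideways glides."""
    return bytes([0xE0 | ch, value & 0x7F, (value >> 7) & 0x7F])

def channel_pressure(ch: int, amount: int) -> bytes:
    """Continuous pressure into the key (the 'press' dimension)."""
    return bytes([0xD0 | ch, amount])

def timbre(ch: int, amount: int) -> bytes:
    """CC74, conventionally the up/down slide along the key surface."""
    return bytes([0xB0 | ch, 74, amount])

# Two fingers on member channels 1 and 2: each bends independently.
stream = (note_on(1, 60, 100) + pitch_bend(1, 9000) +
          note_on(2, 64, 90) + channel_pressure(2, 70))
```

Because each finger owns a channel, a receiving synth can glide one note while the other stays put, which is exactly the behaviour described above.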

Add an iPhone or iPad running iOS 9 or later, and you can instantly turn this into an instrument – no wires required. The free Noise app gives you tons of sounds to start with. That means this is probably the smallest, most satisfying jam-on-the-go instrument I can imagine – something you could fit into a purse, let alone a backpack, and use in a hotel room or on a bus without so much as a wire or power connection. (With ten hours of battery life, I’m fairly certain the Seaboard Block will run out of battery later than my iPhone does.)

Regular CDM readers probably will want it to do more than that for three hundred bucks. So, you do get compatibility with various other tools. Ableton Live, FXpansion Strobe2, Native Instruments Kontakt and Massive, Bitwig Studio, Apple Logic Pro (including the amazing Sculpture), Garageband, SampleModeling SWAM, and the crazy-rich Spectrasonics Omnisphere all work out of the box.

You can also develop your own tools with a rich open SDK and API. That includes some beautiful tools for Max/MSP. Not a Max owner? There’s even a free 3-month license included. (Dedicated tools for integrating the Seaboard Block are coming soon.)

To me, the SDK actually makes this worth the investment – and worth the wait to see what people come up with. I’ll have a full story on the SDK soon, as I think this summer is the perfect time for it.

The Touch block, which previously seemed a bit superfluous, also now looks useful, as it gives you additional hands-on control of how the keyboard responds. That X/Y pad makes a nice combo, too. But my guess is, for most of us, you may drop those and just use the keyboard – and of course modularity allows you to do that.

ROLI aren’t without competition (somewhat amazingly, given these devices were once limited to experimental one-offs). The forthcoming JOUE, from the creator of the JazzMutant Lemur, is a Kickstarter-backed product now inbound. And I have to say, it’s truly extraordinary – the touch sensitivity and precision are unmatched on the market. But there isn’t an obvious controller template or app combo to begin with, so it’s more a specialist device. The ROLI instrument works out of the box with an app, and will be in physical Apple Stores. And the ROLI has a specific, fixed playing style the JOUE doesn’t quite match. My guess is the two will be complementary, and there’s even reason for JOUE lovers to root for ROLI – because ROLI are developing the SDK, tools, instrument integration, and user base that could help other devices to succeed. (Think JOUE, Linnstrument, Madrona Labs Soundplane, not to mention the additions to the MIDI spec.)

Anyway, this is all big news – and coming on the heels of news of Ableton’s acquisition of Max/MSP, this week may prove a historic one. What was once the fringe experimentation of the academic community is making a real concerted entry into the musical mainstream. Now the only remaining question, and it’s a major one, is whether the weirdo stuff catches on. Well, you have a hand in that, too – weirdos, assemble!

https://roli.com/products/blocks/seaboard-block

The post ROLI now make a $299, ultra-compact expressive keyboard appeared first on CDM Create Digital Music.

#### Arturia AudioFuse: all the connections, none of the hidden settings

After a long wait, Arturia’s AudioFuse interface has arrived. And on paper, at least, it’s like audio interface wish fulfillment.

What do you want in an interface? You want really reliable, low-latency audio. You want all the connections you need. (Emphasis on what you need, because that’s tricky – not everyone needs the same thing.) And you want to be able to access the settings without having to dive through menus or load an application.

That last one has often been a sticking point. Even when you do find an interface with the right connections and solid driver reliability and performance, a lot of the time the stuff you change every day is buried in some hard-to-access menus, or even more likely, in some application you have to load on your computer and futz around with.

And oh yeah — it’s €/$599. That’s aggressively competitive when you read the specs.

I requested one of these for review when I met with Arturia at Musikmesse in Frankfurt some weeks ago, so this isn’t a review – that’s coming. But here are some important specs.

### Connections

Basically, you get everything you need as a solo musician/producer – 4 outs (so you can do front/rear sound live, for instance), 4 ins, plus phono pre’s for turntables, two mic pres (not just one, as some boxes annoyingly have), and MIDI.

Plus, there’s direct monitoring, separate master / monitor mix channels (which is great for click tracks, cueing for DJs or live, and anything that requires a separate monitor mix, as well as tracking), and a lot of sync and digital options.

It’s funny, this is definitely on my must-have list, but it’s hard to find a box that does this without getting an expansive (and expensive) interface that may have more I/O than one person really needs.

This is enough for pretty much all the tracking applications one or two people recording will need, plus the monitoring options you need for various live, DJ, and studio needs, and A/B monitor switching you need in the studio. It also means as a soloist, you can eliminate a lot of gear – also important when you’re on the go.

Their full specs:

• 2 DiscretePRO microphone preamps
• 2 RIAA phono preamps
• 2x Mic/Instrument/Line (XLR / 1/4″ TRS)
• 2x Phono/Line (RCA / 1/4″ TRS)
• 4 analog outputs (1/4″ TRS)
• S/PDIF in/out
• Word clock in/out
• MIDI in/out
• 24-bit next-generation A-D/D-A converters at up to 192 kHz sampling rate
• Talkback with dedicated built-in microphone (up to 96 kHz sample rate)
• A/B speaker switching
• Direct monitoring
• Separate master and monitor mix channels
• USB interface with PC, Mac, iOS, Android and Linux compatibility
• 3-port USB hub
• 3 models: Classic Silver, Space Grey, Deep Black
• Aluminum chassis, hard leather-covered top cover

Arturia also promise high-end audio performance, to the tune of “dual state-of-the-art mic preamps with a class-leading >131dB A-weighted EIN rating.” I’ll try to test that with some people who are better engineers than I am when we get one in.

Also cute – a 3-port USB hub. So this could really cut down the amount of gear I pack.

Now, my only real gripe is, while USB improves compatibility, I’d love a Thunderbolt 3/USB-C version of this interface, especially as that becomes the norm on Mac and PC. Maybe that will come in the future; it’s not hard to imagine Arturia making two offerings if this box is a success. USB remains the lowest common denominator, and this is not a whole lot of simultaneous I/O, so USB makes some sense. (Thunderbolt should theoretically offer stable lower latency performance by allowing smaller buffer sizes.)

### And dedicated controls

This is a big one. You’ll read a lot of the above on specs, but then discover that audio interfaces make you launch a clumsy app on your PC or Mac and/or dive into menus to get into settings.

That’s doubly annoying in studio use where you don’t want to break flow. How many times have you been in the middle of a session and lost time and concentration because some setting somewhere wasn’t set the way you intended, and you couldn’t see it? (“Hey, why isn’t this recording?” “Why is this level wrong?” “Why can’t I hear anything?” “Ugh, where’s the setting on this app?” … are … things you may hear if you’re near me in a studio, sometimes peppered with less-than-family-friendly bonus words.)

So Arturia have made an interface that has loads of dedicated controls. Maybe it doesn’t have a sleek, scifi minimalist aesthetic as a result, but … who cares?

Onboard dedicated controls that don’t require menu diving include: talkback mic, dedicated input controls, A/B monitor switching, and a dedicated level knob for headphones.

### And OS compatibility

This is the other thing – there are some great interfaces that lack support for Linux and mobile. So, for instance, if you want to rig up a custom Raspberry Pi for live use or something like that, this can double as the interface. Or you can use it with Android and iOS, which with increasingly powerful tablets starts to look viable, especially for mobile recording or stage use.

Arturia tell us performance, depending on your system, should be reliably in the territory of 4.5ms – well within what you’re likely to need, even for live (and you can still monitor direct). Some tests indicate performance as low as 3.5ms.
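As a rough sanity check on figures like these: round-trip latency is dominated by the input and output buffers divided by the sample rate, plus converter and transport overhead. A quick back-of-the-envelope calculation (the buffer size and overhead term here are illustrative, not Arturia's numbers):

```python
# Back-of-the-envelope audio latency arithmetic (illustrative numbers).

def buffer_ms(frames: int, sample_rate: int) -> float:
    """Time one buffer of audio represents, in milliseconds."""
    return 1000.0 * frames / sample_rate

def round_trip_ms(frames: int, sample_rate: int, overhead_ms: float = 1.0) -> float:
    """Input buffer + output buffer + a fudge term for converters/transport."""
    return 2 * buffer_ms(frames, sample_rate) + overhead_ms

# A 64-frame buffer at 44.1 kHz is ~1.45 ms each way, so a round trip
# in the 3.5-4.5 ms range implies small buffers plus modest overhead.
```

This is also why Thunderbolt's appeal is about sustaining smaller buffer sizes reliably: halving the buffer roughly halves the buffer term, but the overhead term stays.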

### Plus a nice case and cover

Here’s an idea that’s obviously a long time coming. The AudioFuse not only has an adorable small form factor and aluminum chassis, but there’s a cover for it. So no more damage and scratches or even breaking off knobs when you tote this thing around – that to me is an oddly huge “why doesn’t everyone do this” moment.

The lid has a doubly useful feature – it disables the controls when it’s on, so you can avoid bumping something onstage.

Dimensions:
69 × 126 × 126 mm

Weight:
950 g

I’m very eager to get this in my hands. Stay tuned.

The post Arturia AudioFuse: all the connections, none of the hidden settings appeared first on CDM Create Digital Music.

## June 07, 2017

### MOD Devices Blog

#### MOD travels around the world – Part 2

Last year, Gianfranco wrote a post about the international events MOD Devices has attended, and because there’s been a lot of activity recently and a lot more to come in the near future, we’re doing a Part Deux, with recaps of all the latest events and news. Enjoy!

Ok, so we’re a music technology startup and these are three of the greatest words you can say whenever someone asks you “- and what do YOU do?” at an event. But we’re also part of the free/libre/open source software community, which is what makes us a bit of an exotic fish in certain environments. Yet this is what gives us our edge and the ability to try to change the game and provide a creative platform that empowers its users.

At every event we go to, we’re constantly pitching and demonstrating the Duo (and, as of April, its new peripherals) to everyone we meet, and it’s interesting to see that each event has its own specificity, each crowd its expectations, each musician his or her own particular needs. As we have these conversations, we get some wonderful feedback, broaden the community and make some friends in the process. It’s both exhausting and really fascinating!

## Musikmesse 2017

Last April, we went back to Frankfurt and took part in the Musikmesse again. This time, we weren’t accompanied by the musical mastermind who thought of a world without musical instruments, but we had a great team composed of Pjotr, Jesse, Gian and myself. We were located in the electric guitar hall and relied on our beautiful Pedalboard Builder interface to lure attendees to our booth. Also, Pjotr and Jesse’s trumpet and Circuit MOD jams were bound to get us some attention. At one point, they caught the eye of a French podcast crew and I ended up being interviewed for the great Les Sondiers channel (you can check it out below).

We made friends all around us, but a special nod must go to luthier Jean-Luc Moscato and bass virtuoso Jeff Corallini, who were right next to us. With his 7-string bass, Jeff was always impressing everyone who walked by. Someone filmed a nice impromptu jam that happened at some point. Our own Pjotr Lasschuit got some trumpet action there as well:

With music booming everywhere, we were happy to explore some of the other (quieter) halls and check out the latest gear. I was particularly impressed by this super versatile MIDI wind instrument.

All in all, we got another great feeling of our place in this impressive and innovative industry and, like during NAMM earlier this year, we took another step forward in gathering momentum, creating some buzz and starting collaborations.

## LAC 2017

The Linux Audio Conference has been THE community event for us since our first time there in 2013. This year, it was held in Saint Etienne, co-organized by the GRAME from Lyon and the CIEREC from Saint Etienne’s Jean Monnet University. It’s always a great opportunity to meet, chat and have a drink or two with our community’s developers, enthusiasts and supporters.

This year, we held a workshop on the “Origins, features and roadmap of the MOD Duo” and were really thrilled with the dialogue it sparked.

There was also a very insightful keynote speech by Paul Davis, developer of JACK and Ardour among some other great achievements. He presented his view on the state of Linux Audio, open-source development in general and he even mentioned MOD Devices as an example of an open-source-based company striving to get proper marketing promotion (indeed we are!). I was also super excited about the music tutor developed by Marc Groenewegen from the Utrecht School of Music and Technology. We talked a little bit after his session and along with Robin Gareus we imagined how we could soon have a music tutor plugin for the Duo. You can check out these (and others’) talks on the Youtube channel of Université Jean Monnet here.

The evenings were filled with musical performances and our own Jeremy Jongepier, AKA AutoStatic, closed off the second night with a MOD-fueled concert. He totally owned the stage with his Duo, guitar and MIDI controllers, all the while downing a nice cold beer: very RocknRoll! The video for that is here and starts at around 2:40:00.

## Upcoming events

From attending these events we’ve come to realize that we’re really reconciling these two aspects – the investor-friendly and the idealistic FLOSS developer – which isn’t always easy, but they’re actually two sides of the same coin. We’re looking to take the best from both worlds: bring some much-needed investment and new business models to the FLOSS world, and provide evolving and innovative devices based on FLOSS to the music market.

The next events we’ll attend are a perfect place to continue to position ourselves as a company with a different outlook and mindset on the musical effects game.

### Sónar+D MarketLab

Next week, we will be in Barcelona for a very exciting event. It will be our second participation at the Sónar+D after being selected as a finalist for the 2015 Startup Competition. We will have a booth at the MarketLab this time, which is, as the organizers put it, “a space where the creators of the year’s most outstanding technology initiatives present the projects that they have developed in creative labs, media labs, universities and businesses. A place for trying out innovations that explore new forms of creation, production and marketing, and which in turn fosters relationships between professionals in the creative industries and the general public”. Who knows, maybe Björk will come and test the Duo out…

### Les Ardentes Start-up Garden

In early July, we are headed to Liège, in Belgium, to be one of 30 startups at the Living Lab of the Wallifornia MusicTech, which will be held during the Les Ardentes music festival. This will be another great opportunity to show the Duo to a broad audience, from musicians to investors. Good music and great conversations on the horizon – what more could we ask for?

That’s it for now, but there’ll be more next semester, for sure! And if any of you will be around in Spain or Belgium for our next two rendezvous, we’d love to see you, so drop us a line.

## May 30, 2017

### blog4

#### Notstandskomitee concert video

The Notstandskomitee concert at Fraction Bruit #17, Loophole Berlin, 27.5.2017 with tracks from the new album The Golden Times:

## May 26, 2017

### blog4

#### The Golden Times are here

Block 4 released the new album by Notstandskomitee: The Golden Times

The Golden Times by Notstandskomitee

## May 23, 2017

### blog4

#### new Notstandskomitee album and Berlin concert

Block4 set the release date of the new Notstandskomitee album The Golden Times to this Friday, 26 May 2017! It will be released exclusively on Bandcamp at the usual address https://notstandskomitee.bandcamp.com
On Saturday, 27 May 2017, Malte Steiner will play a new Notstandskomitee set with new realtime visuals at the final Fraction Bruit event at Loophole, Berlin.

The Golden Times Are About To Come

#### Body Interfaces: 10.1.1 100 Continue

The performance Body Interfaces: 10.1.1 100 Continue by my better half Tina Mariane Krogh Madsen at the Sofia Underground Performance Art Festival, 28 April 2017.

## May 19, 2017

### Libre Music Production - Articles, Tutorials and News

#### Paul Davis, Ardour and JACK creator/developer, talks at Linux Audio Conference 2017

The Linux Audio Conference 2017 is under way and this year, Ardour and JACK creator/developer, Paul Davis talked about Linux audio and his thoughts on where things currently stand with his presentation "20 years of open source audio: Success, Failure and the In-between".

#### LSP plugins 1.0.24 released

Vladimir Sadovnikov has just released version 1.0.24 of his audio plugin suite, LSP plugins. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

#### Ardour 5.9 is released

Ardour 5.9 has recently been released with new features, including many improvements and fixes.

This release includes -

#### Drumgizmo 0.9.14 is released

The DrumGizmo team have officially announced version 0.9.14 of their drum sampling plugin.

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drum kit that has been recorded with a multi-mic setup.

## May 15, 2017

### ardour

#### Ardour 5.9 released

Ardour 5.9 is now available, representing several months of development that spans some new features and many improvements and fixes.

Among other things, some significant optimizations were made to redraw performance on OS X/macOS that may be apparent if you are using Ardour on that platform. There were further improvements to tempo and MIDI related features and lots of small improvements to state serialization. Support for the Presonus Faderport 8 control surface was added (see the manual for some quite thorough documentation).

As usual, there are also dozens or hundreds of other fixes based on continuing feedback from wonderful Ardour users worldwide.

Read more below for the full list of features, improvements and fixes.

## May 10, 2017

### rncbc.org

#### Qtractor 0.8.2 - A Stickier Tauon release

And now for something ultimately pretty much expected: the Qstuff* pre-LAC2017 release frenzy wrap up!

Qtractor 0.8.2 (a stickier tauon) is released!

Change-log:

• Track-name uniqueness is now being enforced, by adding an auto-incremental number suffix whenever necessary.
• Attempt to raise an internal transient file-name registry to prevent automation/curve files from proliferating across several session load/save (re)cycles.
• Track-height resizing now gets immediate visual feedback.
• A brand new user preference global option is now available: View/Options.../Plugins/Editor/Select plug-in's editor (GUI) if more than one is available.
• More gradient eye-candy on main track-view and piano-roll canvases, now showing left and right edge fake-shadows.
• Fixed the time-entry spin-boxes when changing time offset or length fields in BBT time format across tempo/time-signature change nodes.
• French (fr) translation update (by Olivier Humbert, thanks).

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):

http://sourceforge.net/p/qtractor/wiki/

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Have fun, always.

## May 06, 2017

### Libre Music Production - Articles, Tutorials and News

#### ZARAZA releases new album entirely recorded with Libre Music tools

Ecuadorian/Canadian experimental veterans ZARAZA have just released their third album, Spasms of Rebirth.

It was entirely recorded using Libre Music tools:

• Fedora 25
• Ardour (all mixing)
• Guitarix (all guitars and bass)
• Hydrogen (drums)
• Calf plugins (for mixing in Ardour)
• Audacity (mastering)

#### DrumGizmo version 0.9.13 now available

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drum kit that has been recorded with a multi-mic setup.

#### New Drumgizmo version released with major new feature, diskstreaming!

Version 0.9.13 of the drum sampling plugin DrumGizmo has recently been released with the much-anticipated diskstreaming feature.

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drum kit that has been recorded with a multi-mic setup.

### MOD Devices Blog

#### Tutorial: Arduino & Control Chain

Hi there once again fellow MOD-monsters! As some of you might know, we are currently in the beta testing phase for our new Control Chain footswitch extension. At the same time, we have also released the brand new Arduino Control Chain shield, allowing you to build your own awesome controllers.

If you’re thinking, hey Jesse, what is all that Control Chain talk about?

Control Chain is an open standard, including hardware, communication protocol, cables and connectors, developed to connect external controllers to the MOD – for example, footswitch extensions, expression pedals and so on.
Compared to MIDI, Control Chain is far more powerful. For example, instead of using hard-coded values as MIDI does, Control Chain has what is called a device descriptor, and its assignment (or mapping) message contains the full information about the parameter being assigned, such as the parameter name, absolute value, range and any other data. Having all that information on the device side allows developers to create powerful peripherals that can, for example, show the absolute parameter value on a display, use different LED colors to indicate a specific state, etc. Pretty neat, right?

Until now, you could find two examples on our GitHub page, for a simple momentary button and a potentiometer, but today we will add a new one: we will build a Control Chain device with expression pedal inputs.

# What do I need?

1. One Arduino Uno or Due
2. One Arduino Control Chain shield
3. One stereo (TRS) jack for every expression pedal input that you want (max. 4 on the Uno, 8 on the Due)
4. A soldering iron, some wire and some solder
5. (Optional) Something to put your final build in

# The schematic

Because the Arduino has very high-impedance analog inputs, there is no need for a current-limiting resistor. We can simply hook up the TRS jacks as follows: tip to 5 V, ring to signal and sleeve to ground.*

(*) Not all expression pedals are made equal; some manufacturers use a different mapping than the one described above. Another common mapping is tip to signal, ring to 5 V and sleeve to ground (for example, on the Roland EV-5).

# The code

The Arduino code is quite simple: it reads the ADC values using the analogRead() function and stores them in variables. The Control Chain library takes care of the rest.

The code is written in such a way that you can change the define at the top to the number of ports that you want, without rewriting any code. Do you want 3 expression pedal ports?

#define amountOfPorts 3

The maximum number of ports on an Arduino Uno is 4; the Arduino Due can provide up to 8.

# The build

1. Solder wires to your TRS jack inputs
2. Twist the wires together
3. Solder the sleeves to the ground strip on the CC shield
4. Solder the tips to the 5 V strip on the CC shield
5. Solder the rings to the corresponding analog inputs on the CC shield

Attach the CC shield to the Arduino; your device should now look a little like this:

1. Follow the instructions on our GitHub page and install the dependencies
2. Change the define in the code to the number of ports connected
3. Time for a test drive!
   1. Connect the MOD Duo to the “main” Control Chain port on your new device
   2. Connect your expression pedals and try them out with your MOD Duo!
4. (Optional) Create an enclosure for (semi-)permanent installation; I used an old smartphone box that I had lying around

# The end result

You just built your own Control Chain device, hopefully the first of many more to come. We are looking forward to seeing what all you wonderful people come up with! Don’t hesitate to come and talk to us on the forums if you have any questions about Control Chain devices, the Arduino shield or our favourite musicians.

Talk to you later!

P.S. Vulfpeck is great

### GStreamer News

#### GStreamer 1.12.0 stable release (binaries)

Pre-built binary images of the 1.12.0 stable release of GStreamer are now available for Windows (32/64-bit), iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## May 04, 2017

### GStreamer News

#### GStreamer 1.12.0 stable release

The GStreamer team is pleased to announce the first release in the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## May 02, 2017

### digital audio hacks – Hackaday

#### Robotic Glockenspiel and Hacked HDD’s Make Music

[bd594] likes to make strange objects. This time it’s a robotic glockenspiel and hacked HDDs. [bd594] is no stranger to Hackaday either, as we have featured many of his past projects before, including the useless candle and recreating the song Funky Town from old junk.

His latest project is quite exciting. He has combined his robotic glockenspiel with a hacked hard-drive rhythm section, controlled by a PIC 16F84A microcontroller. The song choice is Axel F. If you had a cell phone around the early 2000s, you were almost guaranteed to have used this song as a ringtone at some point or another. This is where music is headed these days anyway; the sooner we can replace the likes of Justin Bieber with a robot the better. Or maybe we already have?

Filed under: digital audio hacks, robots hacks