October 06, 2015

Create Digital Music » Linux

Bitwig Studio is a real touch DAW you can use today

We’ve seen apps made exclusively for touch devices like the iPad. And we’ve seen very basic touch support in desktop apps. But Bitwig Studio 1.3 is both: a full desktop DAW with real multi-touch support.

So, on the same day we find out about a proper touch laptop, we also get a DAW that’s ready, today, to take advantage of it.

Also, is Bitwig actually trolling Mac fans, or Apple? Because Bitwig is touting the fact that OS X will at least get its new “E-Cowbell device.” (I’m not making this up.)

For multi-touch devices on Windows and Linux (yes, Linux) – plus a specially-optimized profile for Microsoft Surface Pro and Surface Book – Bitwig has a lot of new touch features. They aren’t just responding to touch events; they’re going further.



Full multi-touch support. This is, of course, essential. It doesn’t work on OS X – there literally isn’t a model for processing the events – but it does open up some possibilities even on Linux.

Here’s what that looks like when mixing:

Radial menu and gestures. To try to make touch more useful, Bitwig are also adding a shortcut menu, for quick gestural access to settings for devices, drums, clips, arrangement, notes, and tracks. I really have no idea whether I’m convinced by this without having used it, but I’m intrigued. It also represents a different approach than Ableton’s, which has been to focus on moving control to physical hardware (Push). Clearly, there’s an argument for each approach – there’s something different about getting away from a display and using something tactile – but it’s nice to see something happening with the touch/display end of the equation.

Looking at this at first, it looked like a separate remote-control layer. In practice, though, that “radial menu” is maybe better thought of as a heads-up reference to what gestures do. The result can be really fast gestural editing, as seen here in arrangement:

I’m really keen to try this, especially as arranging with a mouse is painful. (It’s even worse when working with two people, as my studio colleague Nerk can attest.)

You can play right on the interface. Rather than go to a separate iPad remote (as Apple does with its own Logic and GarageBand), Bitwig are building a keyboard right into the tool so you can play directly. It’s like having a hardware controller or an iPad app built into your display.

There’s a built-in drum editor, with a pad layout for playing drum pads plus some touch editing options.

More on 1.3:

Thavius Beck, who is actually featured in press shots from Microsoft, already showed what this might look like on a prototype back in July. I’ll try to track down Thavius and have a chat with my Berlin neighbors at Bitwig soon.

When can you use this? Right now. (And that means if you do have a big multi-touch monitor, that works, too!)

The post Bitwig Studio is a real touch DAW you can use today appeared first on Create Digital Music.

by Peter Kirn at October 06, 2015 08:44 PM

Hackaday » digital audio hacks

Electronics for Aliens

We are surrounded by displays with “millions” of colors and hundreds of pixels per inch, and by “high fidelity” sound systems producing what we perceive to be realistic replicas of the real world.

Of course this is not the case. We rarely stop and think about how our electronic systems have been crafted around the limitations of human perception. So to explore this issue, in this article we ask: “What might an alien think of human technology?” We will assume a lifeform which senses the world around it much as we do, but with massively improved sensing abilities. In light of these abilities we will dub it the Oculako.

Let’s begin with the now mostly defunct CRT display and see what our hypothetical alien thinks of it. The video below shows a TV screen shot at 10,000 frames per second.

Our limited visual system detects changes slowly (PDF). Human persistence of vision takes effect at around 15 frames a second, merging the scan lines together into a single image. The Oculako processes images far faster, and so sees what looks like a line racing down the screen. Such an organism might not even recognize this as a display.

Even if the Oculako can work its way around this slow update rate, it still has odd clusters of red, green and blue dots to contend with. Humans sense color through three overlapping receptor responses across the 400 to 700 nm range of the electromagnetic spectrum.

[Image: Human color sensitivity. Source: Norman Koren]

In the electronics world, we have only developed sources which produce light at distinct wavelengths. And so we mimic the operation of the human eye, mixing light at three wavelengths to fool our eyes into believing they are seeing a single color.

This is far from common to all animals. Dogs can only see green and blue. Some butterflies see from ultraviolet through to red, with a host of additional receptors (5 across the spectrum) giving them an improved ability to distinguish colors.

All these pale in comparison to the mantis shrimp, which has better definition in ultraviolet than we have across the entire spectrum. With a total of 12 receptor types, it has the best vision of any known animal (though it may not use that information well).

But let’s suppose the Oculako is far better than this: its receptors are spectrographic, sensing wavelengths across the spectrum with sub-nanometer accuracy. With high-resolution eyes easily able to pick out the individual pixels in our displays, all it sees is a curious, ever-changing mosaic of color with no discernible meaning.

LCD, LED, and DLP Display Tech

[Image: Common LCDs under a microscope, as the Oculako sees them. Source: ExtremeTech]

LCD and LED displays are a little better: at least the Oculako sees a complete image. But the mosaic structure persists, while screen refreshes race down the screen, merging into one another.

If the Oculako were to come across a modern DLP projector it might get quite a shock. Unlike LCDs, which mix red, green and blue spatially, DLPs mix colors temporally. There are some great videos explaining the operation of the micro mirrors at the heart of DLP projectors, but put simply, a DLP projector is composed of an array of thousands of micro mirrors which reflect light onto or away from a surface to produce an image.

[Image: An SEM image of micro mirrors from a DLP projector. Source: Ben Krasnow]

As the micro mirrors are either “on” or “off”, other techniques are required to produce color and intensity variation. To produce brighter or darker pixels, the mirrors use pulse width modulation (PDF): by flicking the mirrors on and off rapidly, our eyes are fooled into thinking we’ve seen a brighter or darker image. To generate different colors, the projector filters the light through a rapidly spinning color wheel. This produces a quick succession of red, green and blue images which our slow-responding eyes temporally mix into a seemingly continuous color image.
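
A toy simulation makes the trick concrete (just a sketch, with an assumed flip count; the mirror is only ever fully on or off, and perceived brightness is the duty cycle averaged over the eye's integration window):

# PWM brightness: what a slow eye averages versus what the Oculako resolves
flips_per_window = 1000                 # assumed flips per eye-integration window
flips = [1 if i % 4 == 0 else 0 for i in range(flips_per_window)]  # 25% duty cycle

human_sees = sum(flips) / len(flips)    # slow eye averages the window: 0.25, a dim grey
oculako_sees = flips[:8]                # fast eye sees the raw flicker: [1, 0, 0, 0, 1, 0, 0, 0]
print(human_sees, oculako_sees)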

To produce a realistic image using this process, the micro mirrors flick back and forth thousands of times a second. The Oculako’s advanced eyes pick out each flick with ease.

But more than all this, the Oculako sees no meaning in the flatness of these ever-changing mosaics. Like the Lytro “light field” camera, the Oculako’s eyes capture both the intensity and direction of the light entering them. This allows it to reconstruct a 3D representation of the world around it, far more accurate than our stereoscopic vision, and far better than the images produced by our fledgling 3D TVs and VR headsets.

What’s That Noise?

While our displays might be incomprehensible, you might think our sound reproduction is surely better? Unfortunately this is not the case. The best human ears are limited to sounds of 20 kHz and below. This is blown away by what other animals can hear: some species are able to perceive sound ranging into the hundreds of kilohertz. The sounds produced by our speakers therefore sound low and dull to the Oculako. Natural sounds, like babbling water, would also be unrecognizable, clipped as they are by our audio systems. With its advanced hearing, perhaps the Oculako even transmits complex data by sound.

[Image: Image data decoded from a spectrograph.]
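
As a sketch of how that could work, image data can be “painted” into audio so that the picture reappears in a spectrogram: pixel rows map to sine frequencies, pixel brightness to amplitude (all parameters below are illustrative, not taken from the image above):

import numpy as np

def image_to_audio(image, sr=96000, col_dur=0.05, f_lo=1000.0, f_hi=40000.0):
    rows, cols = image.shape
    freqs = np.linspace(f_hi, f_lo, rows)      # top row = highest frequency
    t = np.arange(int(sr * col_dur)) / sr
    columns = []
    for c in range(cols):
        tone = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                   for r in range(rows))
        columns.append(tone)
    out = np.concatenate(columns)
    return out / np.max(np.abs(out))           # normalize to [-1, 1]

img = np.indices((8, 8)).sum(axis=0) % 2       # 8x8 checkerboard test image
signal = image_to_audio(img)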

Our world is likely to be a confusing place for the Oculako. It’s easy to fall into the trap of thinking that other organisms, terrestrial or extraterrestrial, could view our user interfaces even if they didn’t understand them. But this little survey of the visual and audio technologies we’ve developed (and the great work done by hackers to elucidate their construction) shows that they are very narrowly confined to our particular set of senses.

Talking to Aliens

[Image: The Voyager Golden Record]

But what of our actual attempts to communicate with alien life? The most famous of these is perhaps the Voyager Golden Record.

A fascinating artifact in itself, the Voyager record is similar to a normal long-playing record, fabricated out of gold. One side is etched with a graphic designed to provide instructions on the operation of the record and on how pits and grooves store information on the disc.

As well as audio recordings which might teach aliens to speak, the record also encodes color image data. Inevitably, it likely suffers from the issues described here. An enhanced sensory system (like the mantis shrimp’s) does not imply higher intelligence or the ability to easily interpret complex messages, and so the data may remain incomprehensible.

Nonetheless, it’s a very difficult problem to come up with an interspecies communication mechanism, especially considering that we don’t know of any other sentient life-forms or what their senses might be, and that the delivery medium was heavily constrained. Given the technological advances since the 1970s, how would you design this era’s golden record?

Filed under: digital audio hacks, Featured, slider, video hacks

by Nava Whiteford at October 06, 2015 02:01 PM

October 04, 2015

Two steps further

For my little synth module project I created the following systemd unit file /etc/systemd/system/zynaddsubfx.service that starts up a ZynAddSubFX process at boot time:


ExecStop=/usr/bin/killall zynaddsubfx
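
A rough sketch of how the complete unit could look, assuming the zynaddsubfx-mpk wrapper script below as the start command:

[Unit]
Description=ZynAddSubFX synth module
After=sound.target

[Service]
# the wrapper backgrounds the synth and exits once MIDI is connected
Type=forking
ExecStart=/usr/local/bin/zynaddsubfx-mpk
ExecStop=/usr/bin/killall zynaddsubfx

[Install]
WantedBy=multi-user.target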


/usr/local/bin/zynaddsubfx-mpk is a simple script that starts ZynAddSubFX and connects my Akai MPK:


#!/bin/sh

zynaddsubfx -r 48000 -b 64 -I alsa -O alsa -P 7777 -L /usr/share/zynaddsubfx/banks/SynthPiano/0040-BinaryPiano2.xiz &

# wait until the synth's ALSA MIDI port exists, then connect the MPK
while ! aconnect 'MPK mini' 'ZynAddSubFX'
do
  sleep 0.1
done


/usr/local/bin/ is, in turn, a small Python script that shows a message and a red LED on a 16×2 LCD display, so that I know the synth module is ready to use:

# Show a ready message on the character LCD plate.
import Adafruit_CharLCD as LCD

# Initialize the LCD using the plate's default pins
lcd = LCD.Adafruit_CharLCDPlate()

# Red backlight plus a two-line status message
lcd.set_color(1.0, 0.0, 0.0)
lcd.message('Raspberry Pi 2\nZynAddSubFX')

The LCD is not an Adafruit one but a cheaper version I found on DealExtreme; it works fine with the Adafruit LCD Python library though. Next step is to figure out if I can use the buttons on the LCD board to change banks and presets.
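
A minimal sketch of what that button polling could look like with the same library (the actual bank/preset switching is left out; the labels are placeholders):

import time

import Adafruit_CharLCD as LCD

lcd = LCD.Adafruit_CharLCDPlate()

# map the plate's buttons to hypothetical bank/preset actions
buttons = ((LCD.UP, 'bank +'),
           (LCD.DOWN, 'bank -'),
           (LCD.LEFT, 'preset -'),
           (LCD.RIGHT, 'preset +'))

while True:
    for button, label in buttons:
        if lcd.is_pressed(button):
            lcd.clear()
            lcd.message(label)  # placeholder: trigger the real change here
    time.sleep(0.05)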

[Image: Raspberry Pi synth module with 16×2 LCD display]
[Image: The synth module test environment]

The post Two steps further appeared first on

by jeremy at October 04, 2015 09:19 PM

October 03, 2015

Recent changes to blog

Introducing the Quintar: Part 2 of 3 discussion

Here is a short low-end example. I recorded straight from the input, no effects whatsoever.
While playing I moved my right hand from 7.5 cm (3") left of the pickup towards the bridge.


by Broomy at October 03, 2015 02:22 PM

Introducing the Quintar: Part 2b of 3

Recently I discussed the building of my Quintar.
This time I'll zoom in on the pickup of the Quintar.

[Image: conventional low Z pickup]

Every electric instrument needs some kind of pickup to convert the vibration of the strings into an electric signal.
There are several ways to achieve this, but using a magnetic core with an electric (copper) wire wound around it (making it a coil) is by far the most commonly used type of pickup.
The electromagnetic pickup was developed in a time when a high input signal was necessary.
This high signal (approximately 1 V, sometimes even higher) is obtained by winding the copper wire several thousand times around the magnetic core.

So far so good, I hear you say, but there are three important drawbacks:
1. Because of the many windings, the pickup needs a relatively large cavity, which weakens the guitar structurally and needs to be compensated, so a minimal guitar body design has its limitations.
2. The signal of a conventional pickup is unbalanced, which means that it is prone to electromagnetic interference from electrical appliances.
3. With a higher number of windings, the pickup is less able to pick up higher frequencies (without getting too technical about why, the so-called high impedance of the pickup has to do with it).

The solution is pretty simple: lowering the number of windings solves drawbacks 1 and 3, and when the number of windings is brought back to the point that the output signal is around 20 mV, the signal can be fed to a microphone preamp. A microphone preamp input is balanced, which solves drawback 2.
This kind of pickup is called a low impedance pickup, or a low Z pickup.
But here a new drawback arises: a regular guitar amplifier doesn't have a microphone input, so you have to use either a preamp, a line matching transformer, or set the amplifier aside and start using a soundcard or mixer with a microphone preamp. This is where Guitarix comes in, because with a good USB soundcard and a laptop some excellent (bass) guitar sounds can be created!

Back to the pickup that I built.
In essence it is no more than a couple of rectangular strong magnets that I glued in between two pieces of cardboard; then I wound 0.1mm coil wire around this core (800 turns).
I soldered the ends of the wire to two small nails, which I pressed through one piece of the cardboard, soldered two small shielded wires to the nails and connected them to an XLR connector using pins number two and three.
Finally I dropped some water-based hobby glue on the coil wire to stop the wire from moving. This is called potting the pickup; a pickup which isn't potted has a tendency to be microphonic.
The resistance of the pickup is around 300 Ohm, which is perfect for a microphone preamp.
Check this and this link for more info.
I also made a short soundclip without any effects and a short video with some.

There is another low Z alternative, and that is creating a single loop around a magnetic core. This of course will yield a very tiny signal, but by using a transformer you can "amplify" the signal to a workable level. The sound of this pickup is similar to the one described above. One advantage of this pickup is that a prototype can be built in less than 15 minutes.
There is a lengthy but very informative thread here.
I've made a video with a soundclip using this kind of pickup.

and this.
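
As a rough sketch of why the transformer trick works (the turn count and signal level below are illustrative assumptions, not measurements):

# the single loop acts as a one-turn primary; the secondary multiplies
# the voltage by the turns ratio
primary_turns = 1
secondary_turns = 100    # assumed step-up transformer
loop_signal_mv = 0.2     # assumed tiny single-loop output, in millivolts

out_mv = loop_signal_mv * secondary_turns / primary_turns
print(out_mv)            # 20.0 mV -- mic preamp territory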

by Broomy at October 03, 2015 01:00 PM

October 01, 2015

Linux Audio Announcements -

[LAA] 2nd Annual Web Audio Conference - submission deadline extended to October 15

The 2nd Web Audio Conference (WAC) will be held April 4-6, 2016 at Georgia Tech in Atlanta. The keynote speakers for WAC 2016 are Helen Thorington and Frank Melchior. Submissions for papers, talks, posters, demos, performances, and artworks are due October 15, 2015 at 11:59 PM Pacific Time. To submit your work, visit

WAC is an international conference dedicated to web audio technologies and applications. The conference welcomes web developers, music technologists, computer musicians, application designers, researchers, and people involved in web standards. The conference addresses research, development, design, and standards concerned with emerging audio-related web technologies such as Web Audio API, Web RTC, WebSockets and Javascript. It is open to industry engineers, R&D scientists, academic researchers, artists, and students. The first Web Audio Conference was held in January 2015 at IRCAM and Mozilla in Paris, France.

The Internet has become much more than a simple storage and delivery network for audio files, as modern web browsers on desktop and mobile devices bring new user experiences and interaction opportunities. New and emerging web technologies and standards now allow applications to create and manipulate sound in real-time at near-native speeds, enabling the creation of a new generation of web-based applications that mimic the capabilities of desktop software while leveraging unique opportunities afforded by the web in areas such as social collaboration, user experience, cloud computing, and portability. The Web Audio Conference focuses on innovative work by artists, researchers, and engineers in industry and academia, highlighting new standards, tools, APIs, and practices as well as innovative web audio applications for musical performance, education, research, collaboration, and production.

Contributions to the second edition of the Web Audio Conference are encouraged in the following areas:

Web Audio API, Web MIDI, Web RTC, and other existing or emerging web standards for audio and music
Development tools, practices, and strategies of web audio applications
Innovative audio and music based web applications
Client-side audio processing (real-time or non real-time)
Audio data and metadata formats and network delivery
Server-side audio processing and client access
Client-side audio engine and audio rendering
Frameworks for audio synthesis, processing, and transformation
Web-based audio visualization and/or sonification
Multimedia integration
Web-based live coding environments for music
Web standards and use of standards within audio based web projects
Hardware and tangible interfaces in web applications
Codecs and standards for remote audio transmission
Any other innovative work related to web audio that does not fall into the above categories

We welcome submissions in the following tracks: paper, poster, demo, performance, and artwork. All submissions will be single-blind peer reviewed. The conference proceedings, which will include both papers (for papers and posters) and abstracts (for demos, performances, and artworks), will be published online in SmartTech, Georgia Tech's archival open-access repository.

Papers: Submit a 4-6 page paper to be given as an oral presentation.

Talks: Submit an abstract to be given as an oral presentation.

Posters: Submit a 2-4 page paper to be presented at a poster session.

Demos: Submit an abstract to be presented at a hands-on demo session. Demo submissions should include a title, a one-paragraph abstract and a complete list of technical requirements (including anything expected to be provided by the conference organizers).

Performances: Submit a performance making creative use of web-based audio applications. Performances can include elements such as audience device participation, web-based interfaces, WebMIDI, WebSockets, and/or other imaginative approaches to web technology. Submissions must include a title, a one-paragraph abstract of the performance, a link to video documentation of the work, a complete list of technical requirements (including anything expected to be provided by conference organizers), and names and one-paragraph biographies of all musicians involved in the performance.

Artworks: Submit a sonic web artwork or interactive application which makes significant use of web audio standards such as Web Audio API or WebMIDI in conjunction with other technologies such as HTML5 graphics, WebGL, and/or interactivity. Works must be suitable for presentation on a computer kiosk with headphones. They will be featured at the conference venue throughout the conference and on the conference web site. Submissions must include a title, one-paragraph abstract of the work, a link to access the work, and names and one-paragraph biographies of the author(s).

Tutorials: If you are interested in running a tutorial session at the conference, please contact the organizers directly (webaudio at gatech dot edu).

Important Dates
October 15, 2015 at 11:59 PM Pacific Time: submission deadline

December 1, 2015: author notification

March 1, 2016: camera-ready papers and abstracts due

April 4-6, 2016: conference

At least one author of each accepted submission must register for and attend the conference in order to present their work.

Submission Templates and Submission System

Submission templates are available on the conference web site at

The submission system is open at

by at October 01, 2015 11:44 AM

September 30, 2015

Recent changes to blog

Introducing the Quintar: Part 2 of 3 discussion

Wow, this is so cool Hans, this really spreads the range. Cool.

by brummer at September 30, 2015 05:13 PM

Introducing the Quintar: Part 2 of 3 discussion

If you want to tune it as a low B then I would use a somewhat longer stringlength.
I used the only available round wound (which you will need when playing fretless) low B string, from D'Addario.
I tuned it a half step up, which would yield a higher tension at the common 860mm stringlength. This is roughly compensated by the shorter stringlength of 800mm.
check out this video:

for a low-end example. I used a single loop pickup here (I will discuss this alternative later on), but it sounds the same as my current pickup.
In time I will make a low-end example of the Quintar.

by Broomy at September 30, 2015 09:21 AM

Introducing the Quintar: Part 2a of 3

Recently I briefly discussed the building of my Quintar and the pickup I used.
There were some requests for a more detailed description, so here it goes.

Measuring out the Quintar

Basically the Quintar is made out of a tapered piece of wood.
The two measurements needed to create the taper are the nut width and the bridge width.
There are several ways to calculate them, but I like the following method:
BridgeWidth = 2 x FretboardOverhang + (n - 1) x DistanceBetweenSideofStringsBridge + SumDiameterOfStrings

FretboardOverhang = the distance from the side of the outer strings to the side of the fretboard, typically around 3.5mm
n = number of strings
DistanceBetweenSideofStringsBridge = the distance between the sides of two adjacent strings. For a bass it is around 15 to 18 mm.
SumDiameterOfStrings = the sum of the gauges of the strings you want to use. The gauges are generally in inches, so you have to multiply each gauge by 25.4 to get millimeters
(or divide all of the above measurements by 25.4 if you want to use inches).

Calculating the nut width is similar:
NutWidth = 2 x FretboardOverhang + (n - 1) x DistanceBetweenSideofStringsNut + SumDiameterOfStrings

DistanceBetweenSideofStringsNut = DistanceBetweenSideofStringsBridge x Factor
You can choose the Factor to your liking, but generally somewhere between 0.4 and 0.5 yields good results.
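
A worked example of the formulas above (the string gauges are hypothetical values for a six-string in fifths; the other numbers use the typical values just given):

fretboard_overhang = 3.5                 # mm
n = 6                                    # number of strings
spacing_bridge = 18.0                    # mm between string sides at the bridge
factor = 0.5                             # nut spacing as a fraction of bridge spacing

gauges_inch = [0.130, 0.095, 0.070, 0.050, 0.035, 0.024]  # hypothetical gauges
sum_diameters = sum(g * 25.4 for g in gauges_inch)        # ~10.3 mm

bridge_width = 2 * fretboard_overhang + (n - 1) * spacing_bridge + sum_diameters
nut_width = 2 * fretboard_overhang + (n - 1) * spacing_bridge * factor + sum_diameters

print(round(bridge_width, 1), round(nut_width, 1))        # 107.3 62.3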

Now you have to add some extra distance to accommodate the strings at the nut and the tuner machines at the bridge.

You can download this script and use that to draw an outline of a Quintar in FreeCad, which will help you with all measurements.


I use 120mm x 1500mm slabs of 18mm birch plywood to build the Quintar. I route a 10mm x 10mm groove in the middle of one of the pieces, from one end till 0.75 x stringlength, with a handheld router and a 10mm straight router bit.
This groove will accommodate the Martin style trussrod.
At the end of the groove I drill a 20mm hole so I can access the trussrod when I want to adjust it.
The trussrod is made out of a 10x10mm aluminium U-profile, with a 5 or 6mm threaded rod with two rings and nyloc nuts.
When you tighten the nuts on either end of the rod, the U-channel will bend, thereby applying counterforce to stabilize the neck.

Lay the piece with the groove facing up on a level surface, install the trussrod with the opening downwards and glue the second piece onto the first.
To keep the two pieces from slipping when the glue is applied, drill four holes in places which will be cut off later and hold the pieces together with screws.
You'll need a lot of clamps to get a good glue-up; also use some scrap wood between the clamps and your plywood, or you will damage the plywood.
Let the glue set for at least eight hours.

With all clamps removed you can draw the outline of the Quintar.

Now it's time to cut the taper. There are several ways to do this, depending on your skills and tools:
1. Use a handsaw, cut the taper to approximately 2mm accuracy, and sand or plane (using a sharp and low-angle plane) it even
2. Use a handsaw and clean up with a handheld router with a straight bit with a following bearing
3. Use a table saw and a tapering jig
4. Use a handheld circular saw with a rail

When the taper is cut you can round off the neck; again there are several options:
1. Using rasps, files and sandpaper
2. Using a router with a round-over bit (I use a round-over bit with 18mm diameter)
3. Using a handheld belt sander
4. Using an angle grinder with a sanding disc

Next step is to cut some grooves to accommodate the strings and to make a small piece of wood as a string retainer.
I use what I have at hand for this: some files and a few saws of different thicknesses.

The next step is to create the "headstock", either by making the upper slab a bit shorter (which you have to do before gluing up!)
[Image: Stick Bass Front]
or, similar to a classical guitar, by cutting some holes, sawing out the middle part using a jigsaw or jeweler's saw, and filing and sanding smooth.
An alternative could be to make a template and use a router. When the cavity is done, you drill the holes for the tuner machines.

Last but not least you have to decide what type of pickup you want to use:
1. A single loop low Z pickup
[Image: Integrated single loop pickup]
2. A "conventional" low Z pickup
[Image: conventional low Z pickup]

For the first you have to drill three holes about 3mm under the surface of the fretboard. The size of the holes depends on the thickness of the wire and magnets.
For the second pickup you have to route or cut a cavity about 5mm deep and 40mm wide, around 7/8 of the stringlength from the nut.

The Quintar is now ready to be sanded up to 320 grit; make sure you keep the fretboard level.
You can now stain the wood if you like before finishing.
There are several ways to get a hard finish, e.g. using superglue or epoxy glue, but I prefer floor lacquer, because it is easy to apply (just brush it on).

Some general remarks:
- There are a lot of ways to make things, so if you find another way you like better, please do so
- Please feel free to ask any questions!

Next post I will discuss the building of the pickups.

by Broomy at September 30, 2015 08:53 AM

Introducing the Quintar: Part 2 of 3 discussion

Thanks for the comments, I will gather some stuff and will write a more comprehensive description of the building process and technique.
To be continued...

by Broomy at September 30, 2015 06:10 AM

September 29, 2015

Recent changes to blog

Introducing the Quintar: Part 2 of 3 discussion

The video sounds really nice. I'd like to hear the low end too. Does that C string get flabby with the medium scale length? I'm considering trying a similar build to your upright stick bass, though I think I'd prefer to use a full bass scale length. Naturally this would make chords more difficult, but I'm mostly considering this for bass. I'm not sure I could handle 5ths tuning either...

I'd love more detailed build plans as well. A diagram/picture details of making the pickup would help since I haven't ever wound one myself.

Looking forward to more demo videos for sure!

by ssj71 at September 29, 2015 10:26 PM

Introducing the Quintar: Part 2 of 3 discussion

Please tell us more about the pickup you've built yourself; please give me a detailed instruction/schematic.

by brummer at September 29, 2015 08:11 PM

Scores of Beauty

Glyph comparison of alternative music fonts

Abraham Lee has designed a rich collection of alternative LilyPond music fonts. Urs Liska has provided instructions on how to include them in the LilyPond score layout.

Although I have already used the fonts in LilyPond scores, my main use of the fonts is the display in ThePirateFugues software. The following video browses through the design themes for scores:

In this post, we provide a per-glyph graphical comparison of the most common glyphs in modern notation. The fonts that we use in the lineup are

  • Emmentaler,
  • all 13 fonts by Abraham Lee, and
  • Gonville, a font by Simon Tatham.

The list of symbols in the LilyPond font specification is extended on a regular basis, with Emmentaler as the reference: glyphs are added to Emmentaler in unstable releases of LilyPond, and the other fonts have to catch up. The fonts by Abraham Lee are compatible with LilyPond 2.18.2. The work on Gonville was discontinued about a year ago.

In our visualizations, any missing/unspecified glyphs are apparent from the gaps.
Click on the images to load a high-resolution version.

Selected glyphs with prefix clefs:


Selected glyphs with prefix timesig:


Selected glyphs with prefix noteheads:


Selected glyphs with prefix flags:


Selected glyphs with prefix rests:


Selected glyphs with prefix accidentals:


Selected glyphs with prefix scripts:


More glyphs with prefix scripts:


Selected glyphs with prefix pedal:


Glyphs of mensural notation are omitted because the glyph design is typically identical to the Emmentaler font.

End of post.

by datahaki at September 29, 2015 07:54 AM

September 28, 2015

Working on a stable setup

Next step for the synth module project was to get the Raspberry Pi 2 to run in a stable manner. It seems like I’m getting close, or that I’m already there. First I built a new RT kernel based on the 4.1.7 release of the RPi kernel. For this I had to check out an older git commit, because the RPi kernel is already at 4.1.8. The 4.1.7-rt8 patchset applied cleanly and the kernel booted right away:

pi@rpi-jessie:~$ uname -a
 Linux rpi-jessie 4.1.7-rt8-v7 #1 SMP PREEMPT RT Sun Sep 27 19:41:20 CEST 2015 armv7l GNU/Linux

After cleaning up my cmdline.txt it seems to run fine without any hiccups so far. My cmdline.txt now looks like this:

dwc_otg.lpm_enable=0 dwc_otg.speed=1 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootflags=data=writeback elevator=deadline rootwait

Setting the USB speed to Full Speed (so USB 1.1) with dwc_otg.speed=1 is necessary, otherwise the audio coming out of my USB DAC sounds distorted.

I’m starting ZynAddSubFX as follows:

zynaddsubfx -r 48000 -b 64 -I alsa -O alsa -P 7777 -L /usr/share/zynaddsubfx/banks/SynthPiano/0040-BinaryPiano2.xiz

With a buffer of 64 frames latency is very low, and so far I haven’t run into instruments that cause a lot of xruns at this buffer size. Not even the multi-layered ones from Will Godfrey.
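
For a rough sense of scale, the numbers behind that (a sketch assuming two ALSA periods, ignoring USB and converter overhead):

# latency estimate for the settings above
sample_rate = 48000      # Hz, from -r 48000
buffer_frames = 64       # from -b 64
periods = 2              # assumed ALSA default

period_ms = 1000 * buffer_frames / sample_rate
print(period_ms)               # ~1.33 ms per period
print(periods * period_ms)     # ~2.67 ms of output buffering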

So I guess it’s time for the next step, creating a systemd startup unit so that ZynAddSubFX starts at boot. It would also be nice if USB MIDI devices got connected automatically, and if you could somehow see which instrument is loaded; an LCD display would be great for that. I’d also like to have the state of the synth saved, maybe by saving an .xmz file whenever there’s a state change or at regular intervals. And the synth module will need a housing or casing. Well, let’s get the software stuff down first.

The post Working on a stable setup appeared first on

by jeremy at September 28, 2015 08:37 PM

Linux Audio Announcements -

[LAA] New Yoshimi Release :)

We are pleased to announce the release of Yoshimi V1.3.6

Principal features for this release are the introduction of controls from the
command line, covering many setup options, as well as extensive
root/bank/instrument management. Some of these new controls are also available
to MIDI via new NRPNs.

Vector control has been extended so that there are four independent 'features'
that each axis can control.

ALSA audio has had a makeover, and will now work at your sound card's best bit
depth - not just 16 bit (as it used to be).

In the 'examples' directory there is a complete song set, 'OutThere.mid' and
'OutThere.xmz'. Together these produce a fairly complex 12 part tune that makes
Yoshimi work quite hard.

More information on these and other features are in the 'doc' directory.

Yoshimi source code can be obtained either from:

Our mailing list is now:

Will J Godfrey
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Linux-audio-announce mailing list

by at September 28, 2015 05:32 PM

September 26, 2015

Recent changes to blog

Introducing the Quintar: Part 2 of 3

Last week I introduced the Quintar and wrote about why and how I came to the all fifths tuning.
This week I'll discuss the technique behind it and the last part will be about the Quintar's perfect match with Guitarix.

As I wrote last week, I've built several prototypes. With every new prototype I moved further away from the electric guitar clichés.
One of the things I noticed is that there are a lot of convictions and conventions in the musical world that are partly based on myths, or are simply there because nobody questions them.
Musicians and builders tend to be a bit conservative, in my humble opinion.

Before continuing: Don't get me wrong, sometimes clichés don't become clichés without a reason, but I think it is always good to judge them on their own merits.

Back to the Quintar.
The final step before coming to this design was an in-between project which I did when I had two days of spare time (which doesn't happen too often...).
My goal was to build an instrument that was easy and fast to build, with a building time under 15 hours.
Inspired by the simple but beautiful instruments of Ergo Instruments, I built a stick double bass.
After playing with it for a couple of months, I got a bit annoyed by having to stand (still) while playing, so I duct-taped a strap on it and flung it around my neck.
To my surprise it worked out pretty well and it played like a charm.
The bass was well balanced, due to the fact that the tuning machines were on the bridge side.

Finally I built the current prototype of the Quintar.

Again I wanted to make it as simple as possible and make use of commonly available materials (and of course share the way it's built, so others can do the same).
Therefore I wanted to leave out the truss rod and the frets, and keep the electronics to an absolute minimum.

After a little googling I found that there were basses made without reinforcing the neck, and that playing with a thick neck could have some ergonomic advantages.
With this in mind I glued up two slabs of 18mm / 3/4" birch plywood and started off.
Although plywood is generally looked down on, I find it a very cool product to work with. Because of the layers, plywood is stronger and more stable than a regular piece of wood of the same species.
Further, it gives you a flat surface and square sides to start with. Last but not least, you can buy it everywhere for a reasonable price.
I use birch plywood because it's hard and strong but still easy to work with.

I used a stringlength of 800mm, which is long enough for the low C string while still letting me play chords.
The stringspacing is about 9mm at the nut and 18mm at the bridge.
Then I cut the taper, made cavities for the pickup and the tuner machines, and rounded off the neck.
Besides drilling some holes for the tuner machines and the XLR connector, the building part is then already done.
I sanded and dyed the wood before finishing it with a couple of layers of floor lacquer.

The pickup was made out of two pieces of cardboard, a few strong magnets and 800 turns of coil wire (0.1mm thickness). I soldered the wires to an XLR connector, and I use a microphone input to plug my Quintar in.
This simple pickup has three big advantages:
Firstly, it is relatively thin compared to a regular pickup, so I don't have to take a lot of wood away.
Secondly, it has a flat frequency response, which means that it picks up all frequencies evenly.
Last but not least, the pickup is balanced.

Next up: Part 3 The Quintar's perfect match with Guitarix

by Broomy at September 26, 2015 07:25 PM

September 25, 2015

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.6.0 stable release

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

This release has been in the works for more than a year and is packed with new features, bug fixes and other improvements.

See for the full list of changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate.

September 25, 2015 11:00 PM

September 23, 2015


Notstandskomitee concert video AKK Karlsruhe 18.6.2015

Notstandskomitee live at AKK Karlsruhe 18.6.2015 

Excerpt of the concert by Notstandskomitee; the full length is usually between 30 and 40 minutes. The concept of the Angewandte Elektronik TV Festival is to film the artists separated from the audience, augment the video with graphics by a VJ, and project it to the audience. Also performing on this evening were Circuitnoise, Blood Vault and Benoit & The Mandelbrots.

For booking Notstandskomitee write to : booking AT tmkm DOT dk

video by Tina Mariane Krogh Madsen

by herrsteiner ( at September 23, 2015 02:18 AM

September 22, 2015

Pid Eins

systemd.conf close to being sold out!

Only 14 tickets still available!

systemd.conf 2015 is close to being sold out, there are only 14 tickets left now. If you haven't bought your ticket yet, now is the time to do it, because otherwise it will be too late and all tickets will be gone!

Why attend? At this conference you'll get to meet everybody who is involved with the systemd project and learn what they are working on, and where the project will go next. You'll hear from major users and projects working with systemd. It's the primary forum where you can make yourself heard and get first hand access to everybody who's working on the future of the core Linux userspace!

To get an idea about the schedule, please consult our preliminary schedule.

In order to register for the conference, please visit the registration page.

We are still looking for sponsors. If you'd like to join the ranks of systemd.conf 2015 sponsors, please have a look at our Becoming a Sponsor page!

For further details about systemd.conf consult the conference website.

by Lennart Poettering at September 22, 2015 10:00 PM

September 21, 2015

Linux Audio Announcements -

[LAA] Summer'15 release frenzy wrap up!

Summer'15 release frenzy is not over yet!... well, it's only over on the
'equinox' anyway...
* QmidiNet - A MIDI Network Gateway via UDP/IP Multicast *
QmidiNet [1] 0.3.0 has been released!
QmidiNet [1] is a MIDI network gateway application that sends and
receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using
UDP/IP multicast. Inspired by multimidicast [4] and designed to be
compatible with ipMIDI [5] for Windows.
Project page:
- source tarball:
- source package:
- binary packages:
- System tray icon now blinks on network send/receive activity.
- Prefer Qt5 over Qt4 by default with configure script.
- Complete rewrite of Qt4 vs. Qt5 configure builds.
- Fixed for some strict tests for Qt4 vs. Qt5 configure builds.
* QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast *
QmidiCtl [2] 0.3.0 has been released!
QmidiCtl [2] is a MIDI remote controller application that sends MIDI
data over the network, using UDP/IP multicast. Inspired by multimidicast
[4] and designed to be compatible with ipMIDI [5] for Windows. QmidiCtl
[2] has been primarily designed for the Maemo [6] enabled handheld
devices, namely the Nokia N900 [7] and also being promoted to the Maemo
Package [8] repositories. Nevertheless, QmidiCtl may still be found
effective as a regular desktop application as well.
Project page:
- source tarball:
- source package:
- binary packages:
- Prefer Qt5 over Qt4 by default with configure script.
- Complete rewrite of Qt4 vs. Qt5 configure builds.
- Fixed for some strict tests for Qt4 vs. Qt5 configure builds.
* QXGEdit - A Qt XG Editor *
QXGEdit [3] 0.3.0 has been released!
QXGEdit [3] is a live XG instrument editor, specialized on editing
MIDI System Exclusive files (.syx) for the Yamaha DB50XG [9] and thus
probably a baseline for many other XG devices.
Project page:
- source tarball:
- source package:
- binary packages:
- Single/unique application instance control adapted to Qt5/X11.
- Prefer Qt5 over Qt4 by default with configure script.
- Complete rewrite of Qt4 vs. Qt5 configure builds.
- A new top-level widget window geometry state save and restore
sub-routine is now in effect.
- Added "Keywords" to desktop file; fix passing debian flags on
configure (patches by Jaromír Mikeš, thanks).
- Fixed for some strict tests for Qt4 vs. Qt5 configure builds.
- Added application description as freedesktop.org's AppData.
QmidiNet [1], QmidiCtl [2] and QXGEdit [3] are free, open-source
Linux Audio [10] software, distributed under the terms of the GNU
General Public License (GPL) version 2 or later [11].
See also:
[1] QmidiNet, a MIDI Network Gateway via UDP/IP Multicast
[2] QmidiCtl, a MIDI Remote Controller via UDP/IP Multicast
[3] QXGEdit, a Qt XG Editor
[4] multimidicast, sends and receives MIDI from ALSA sequencers over
your network
[5] ipMIDI, MIDI over Ethernet ports - send MIDI over your LAN
[6] Maemo Community
[7] Nokia N900, the first Maemo device which may also be used as a phone
[8] Maemo package overview for QmidiCtl
[9] Yamaha DB50XG, PC Soundcard Daughter Board
[10] Linux Audio consortium of libre software for audio-related work
[11] GNU General Public License (GPL)
Have (lots of) fun, always!
rncbc aka. Rui Nuno Capela
Linux-audio-announce mailing list

by at September 21, 2015 10:13 PM

Summer'15 release frenzy wrap up!


Summer'15 release frenzy is not over yet!... well, it's only over on the equinox anyway...

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.3.0 has been released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Project page:
  • System tray icon now blinks on network send/receive activity.
  • Prefer Qt5 over Qt4 by default with configure script.
  • Complete rewrite of Qt4 vs. Qt5 configure builds.
  • Fixed for some strict tests for Qt4 vs. Qt5 configure builds.


QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.3.0 has been released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows. QmidiCtl has been primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and is also promoted in the Maemo Package repositories. Nevertheless, QmidiCtl may still be found effective as a regular desktop application as well.

Project page:
  • Prefer Qt5 over Qt4 by default with configure script.
  • Complete rewrite of Qt4 vs. Qt5 configure builds.
  • Fixed for some strict tests for Qt4 vs. Qt5 configure builds.


QXGEdit - A Qt XG Editor

QXGEdit 0.3.0 has been released!

QXGEdit is a live XG instrument editor, specialized on editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG and thus probably a baseline for many other XG devices.

Project page:
  • Single/unique application instance control adapted to Qt5/X11.
  • Prefer Qt5 over Qt4 by default with configure script.
  • Complete rewrite of Qt4 vs. Qt5 configure builds.
  • A new top-level widget window geometry state save and restore sub-routine is now in effect.
  • Added "Keywords" to desktop file; fix passing debian flags on configure (patches by Jaromír Mikeš, thanks).
  • Fixed for some strict tests for Qt4 vs. Qt5 configure builds.
  • Added application description as freedesktop.org's AppData.


QmidiNet, QmidiCtl and QXGEdit are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Have (lots of) fun, always!

Summer'15 signing off.

by rncbc at September 21, 2015 06:20 PM

September 20, 2015

Linux Audio Announcements -

[LAA] mcpdisp-0.0.4 is released

The Mackie Control Protocol display emulator has a new release.

The code has been redone in C++ so that I could take it out of the terminal
age into the GUI age.

If you are using a BCF2000 or two this will add the Mackie scribble strips
and channel LED indicators to your setup.

Source can be downloaded from:

or the latest can be cloned from:
git clone

Len Ovens

Linux-audio-announce mailing list

by at September 20, 2015 03:20 PM

September 19, 2015

Recent changes to blog

Introducing the Quintar: Part 1 of 3

I've built a Quintar, which is the humble fruit of a musical and technical journey I've made over the past couple of years.

In a nutshell: a Quintar is a fretless instrument tuned in fifths, starting from a low C (a half step above the low B of a five string bass) up to a B (tuned the same as the B of a guitar). Although it is a simple instrument, it's very versatile: I can play in the bass register, play melodies, and even playing chords is possible. Last but not least, being fretless gives the opportunity to play with different tunings such as just intonation, 24-tet or 31-tet.
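
As a quick numeric check of that range (a sketch in equal temperament, assuming six strings a perfect fifth apart):

# fifths-tuned open strings from C1 up to B3
c1 = 32.70                                           # Hz, equal-tempered C1
freqs = [c1 * 2 ** (7 * i / 12) for i in range(6)]   # a fifth = 7 semitones
print([round(f, 1) for f in freqs])
# [32.7, 49.0, 73.4, 110.0, 164.8, 246.9] -> C1 G1 D2 A2 E3 B3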

I'll introduce the Quintar in three parts:
1. The tuning
2. The technique
3. The match made in heaven: The Quintar and Guitarix

The Tuning

I picked up a guitar, which we had lying around in the house, when I was nine and started plucking the strings and fretting some notes.
I was sold.
From then on I've been playing up until now.

When I was in my mid teens and started to practice more seriously, I noticed that I had to memorize a lot of different chord and scale shapes even though they sounded the same, as a result of the inconsistent tuning of the guitar.
Secondly, I was always fond of the sound of a (double) bass, and regretted that I could not play in the low register. I think it was 1999 when I saw Charlie Hunter playing his eight string guitar at the North Sea Festival.
I was blown away.
From then on I was looking for a way to create an extended range instrument with a consistent tuning.

While studying jazz guitar at the conservatory of Rotterdam I started experimenting with different tunings: all major thirds and all fourths.
This solved my “consistency problem”, but I still wasn't able to play in the bass register without adding an awful lot of strings. Besides, both these tunings sound horrible when all strings are played open.

After graduation I abandoned my experiments for a while, or, so to say: “life got in the way”.

About seven years ago I picked the thread back up, bought some wood at the local hardware store and built my first extended range guitar, based on the eight string guitar of Charlie Hunter (tuned: standard dropped D tuning with a low A and low E). Technically it was a horrible instrument, but it set me off for further experimentation.
Later on I built a seven string version with the same tuning, omitting the high e. When I finished that guitar I became interested in building an ergonomic guitar, and accidentally I stumbled on a site about electric cellos; then my tuning problem was solved!

Cellos (and most of the other members of the violin family) are tuned in perfect fifths, and this gave me what I wanted:
1. A consistent tuning
2. An extended range (because of the relatively wide interval: 7/12 of an octave)
3. A great sound when played with all strings open (being in harmony with the second overtone)

Hence the name Quintar: quint(a) referring to the fifth in Latin-derived languages, and -tar from guitar.

The next version I built was a seven stringed, ergonomic, fanned fret instrument, tuned from low C to f#, which I played a lot.
After a while I wanted to be able to step away from the regular 12-tet tuning. Further, I noticed that I hardly used the f# string, because it gave a shrill and weak sound compared to the other strings. Finally, I felt the need for more space between the strings. With seven strings this would yield a very wide neck, so I decided to drop the highest string (besides, I like it when all the strings are tuned in the key of C, but that is an irrational argument ;-)).

With a little detour I finally built the current version, and when I first strung her up it felt like coming home.


Next time: The technique

by Broomy at September 19, 2015 05:50 PM

September 18, 2015

Linux Audio Announcements -

[LAA] MMA 15.09

A stable release, version 15.09, of MMA--Musical MIDI Accompaniment
is available for downloading. In addition to a number of bug fixes
and optimizations, MMA now features:

- Works with Python 2.7 or 3.x
- Number of minor bug fixes
- Added RPITCH for random "mistakes"
- Added FretNoise option for Plectrum tracks
- Other minor enhancements

Please read the file text/CHANGES-15 for a complete list of changes.

MMA is an accompaniment generator -- it creates MIDI tracks
for a soloist to perform with. User supplied files contain
pattern selections, chords, and MMA directives. For full details
please visit:

If you have any questions or comments, please send
them to:

**** Listen to my FREE CD at ****
Bob van der Poel ** Wynndel, British Columbia, CANADA **

by at September 18, 2015 10:39 PM

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services 1.6.0 release candidate 2 (1.5.91)

The GStreamer team is pleased to announce the second release candidate for the stable 1.6 release series. The 1.6 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The final 1.6.0 release is planned in the next few days unless any major bugs are found.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, or gst-editing-services, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, or gst-editing-services

Check the release announcement mail for details and the release notes above for a list of changes.

September 18, 2015 08:20 PM

Create Digital Music » open-source

What it means that the MeeBlip synth is open source hardware


The MeeBlip synthesizer project is about to turn five years old. I feel this collaboration with engineer James Grahame has been one of the most important to me and to CDM. We haven’t talked so much about its open source side, though – and it’s time.

In five years, we’ve sold thousands of synths – most of them ready-to-play. The MeeBlip isn’t a board and some bag of parts, and it isn’t a kit. You don’t need a soldering iron; after our very first batch, you don’t even need a screwdriver. The MeeBlip is an instrument you can use right away, just like a lot of other instruments on the market.

But unlike those other instruments, the MeeBlip is open source hardware. Not just the firmware code but also the electronics design that makes it work is available online and freely licensed. We became, to my knowledge, the first ready-to-play musical hardware to be available in that form in any significant numbers.

That’s not to brag – we should actually consider whether we’re innovative, or whether we’re just plain crazy. Being end user open source hardware isn’t just unusual in music. It’s still a tough sell in hardware in general.

When we embraced the idea in 2010, we frankly didn’t know whether it would work. Now, I think we can have some new confidence – not just for us, but for anyone interested in the concept. So let’s talk about how open hardware works, why we think it will continue to work for the MeeBlip, and how people interested in making hardware can make it work for them.

There is a definition for open source hardware

The 2010 launch year for MeeBlip also saw the release of the Open Source Hardware Definition and the first big annual summit on the topic. I was lucky to get to know the two women who spearheaded making these things happen – Ayah Bdeir (founder of littleBits) and Alicia Gibb. You can read our interview with them from the time, which covers a lot of history.

The final definition is here:

And in fact, the Open Source Hardware Association has its annual summit tomorrow in Philadelphia. James is heads-down in Calgary, and me in Berlin, so we can’t make it – hope we can see a European satellite event soon:

There were a lot of significant folks contributing to that definition. Creative Commons, littleBits, MakerBot, SparkFun, Wired, Make, Arduino, Adafruit, the MIT Media Lab, NYU ITP, and Parallax are all onboard – and I see a lot of old NYC friends (some of them now more famous, like Bre Pettis and Limor Fried). Like a lot of ideas, it helps to be in a scene; it made a big difference to me to get to know these people and talk to them about it.

What they did in the end was to closely follow a software definition, the Open Source Definition for Open Source Software built by Bruce Perens and the Debian project.


MeeBlip has to do some work to be open source hardware

It’s been great to see the for-sale music technology field get more open. We’ve seen makers publishing schematics, releasing open source firmware, and more. But to be really open hardware, the standards are tougher.

Manufacturers who want to call themselves open source hardware have some work to do. The OSHW definition is a really tough definition, but we have done our best to understand and follow it. You should definitely read the whole definition if you’re interested, but here are the big points:
1. The design is public.
2. The source and documentation are public, and in a way that lets you modify it, using an all open source toolchain.
3. You can learn from that design, modify it, make the hardware yourself, and make and sell your own derivatives.
4. A license guarantees your rights to use the tool, without discriminating against how you use it or what you use it with. (That doesn’t come without obligations to the user, though; see below.)

We meet all those manufacturer obligations with the open source components of the MeeBlip, including the front panel. Enclosures are a separate problem, because you design an enclosure specific to the equipment used to manufacture it – yes, even a 3D printer doesn’t really solve that. (Think of it this way: you can’t make a recipe for cake without specifying what kind of cake.) So our enclosure is proprietary, as it’s specific to our manufacturer, but I’d actually love to see people make and share custom, fully open enclosure designs in the future.

There are two aspects to this. The one you probably know best is the license – for the MeeBlip, that includes the GPL v3 (for code) and Creative Commons BY-SA (for hardware designs and look). But the job of the manufacturer is to provide both the design/documentation and the license.

Think of it like building a public park: you need the actual park first, and then maybe a sign that explains to people how they are allowed to use it. As with that sign, just posting rules isn’t enough to make them magically happen. And as with a park, odds are other park-goers, not the police, will be the ones who are most effective at keeping each other to the rules.


Sharing is generous – but it has obligations, too

“Open source” is not a free-for-all, not an invitation to give away your work – not with software, and not with hardware. It’s a system that works when all the participants understand and act on their obligations.

For most people, this isn’t an issue. The whole point for us is to make the MeeBlip as accessible as possible. We hope you’ll poke around the code, even if you’re not a programmer. We hope you’ll look around the circuits and learn them.

Where your obligations come in are if you want to share something you’ve made.

The first and most important requirement is attribution. If you make something based on the MeeBlip, you have to tell people you’ve done so. And that should be a standard for anything we make, even before we get into licenses or legal obligations – this is what’s ethical. Folk singers will often introduce a song by saying who wrote it, or who taught it to them. In synthesis, we’re very often proud to be connected to those who came before.

The second obligation is to contribute to the open source process. This means that if you share something you’ve made with others, you need to make sure the license goes along with it. That way, derivative products give people the same freedoms the original does.

The licenses actually require you to do this, too. We use “copyleft” licenses for our code and our designs. This means that any derivative works have to have the same license. It doesn’t mean you can’t combine the MeeBlip with proprietary tools – the open source hardware definition actually says you’re free to use whatever you like! But if you make a new synth based on the MeeBlip, you need to share what you’ve changed. An easy way to do this is to simply “fork” the GitHub repository, as that also lets people see your changes versus the original, and makes it easy to link between versions.

We know a lot of this can be complicated. So, the easiest thing to do if you’re thinking of making something is simply to get in touch. We’d really enjoy the chance to talk to you about it, and we can probably help you through what might otherwise be a tricky process.

We will certainly enforce these rules. That doesn’t mean stopping anyone from making hardware – on the contrary, we want to help people make any derivatives correctly.

We recently encountered a synth builder who had made a copy of the MeeBlip anode hardware; the internal electronics had only minor modifications, and the firmware and use were identical. In this case, we did point out that James' engineering work wasn't attributed, and we made ourselves available to help that builder follow the rules and meet the licensing requirements. That builder seems to have decided not to pursue the project, but we're still available to them and to anyone else who wants to do this. We are literally volunteering our time to help you do it – the very opposite of trying to stop anyone from modifying or producing derivatives of the MeeBlip.


How are we doin’?

I’m proud of the first five years of MeeBlip, but we’re only getting started exploring its open aspect. What we have seen is some immediate advantages to open source synthesizer hardware.

People are learning from the project. We’ve had many MeeBlip customers poke around in the code and schematics. We’ve been able to use those to answer questions, for the more technically minded. And people have used this exhaustive documentation to make some of their own projects.

People do fabricate their own synths. There are markets where we simply can’t afford to sell the MeeBlip. In those corners of the world, it can be cheaper and more efficient for people to make their own. Because the MeeBlip uses all standard parts and nothing unusual or proprietary, they’re free to do that, and a handful have. And meanwhile, in the rest of the world, we can usually provide a better value proposition than the DIY method – so this freedom doesn’t put us out of business.

Open source is peace of mind. In an age when so much is relegated to sales cycles and doomed to wind up in landfills, having open source hardware means you know a product becomes obsolete far less easily.

Openness can lead to modifications. We've even seen some firmware suggestions from users. We've seen people build their own, very often amazing, enclosures. Just having schematics available makes this easier.

But look beyond the box. Now, there’s a whole lot more to do. Giving musicians the freedom to modify their instruments is more than just providing documentation and licensing. They have to have the know-how to do this.

This has probably been our biggest failing, but also our greatest opportunity. The next stage is really applying that openness as a way of helping people learn more about electronics, code, and synthesis. Now that we’re smarter about the product side, I hope our next five years are more about the experience side – from the end user just learning to make sounds for the first time to those delving deeper into engineering and invention.

And don’t be afraid. Fear has I think been the greatest obstacle to open source hardware. It’s clearly not the right paradigm for every project. On the other hand, I think fears about clones and theft may overestimate the dangers – at least when it comes to music.

Ultimately, what allows an open project to be effective is a respect for sharing and originality. And that's where I think the music community has something special. Provided we keep our brand clear, I've been struck by how willing musicians are to buy direct from the maker and to recognize designs that are original.

The reality is, no one is stopping clones with or without special licenses. Even many mid-sized manufacturers can’t afford intellectual property litigation; most can’t afford patent registration in the first place, which these days is often a vanity project.

But what we can do is build a community of people who care about music, about musical instrument design, and about sharing what they do. Those are the people who will value originality. They’re the ones who challenge us makers to be better.

The history of electronic musical instruments is rooted in sharing. Theremin’s designs inspired Bob Moog. How-to-build-your-own-Theremin articles inspired future synth builders – and engineers in many other fields, not just music. Learning from a filter design or a sound routing architecture became a 20th century analog to details of woodworking and drum heads in acoustic instruments from years before. Sharing how we make musical instruments is part of what makes culture.

You can get an anode right now. The limited edition white MeeBlip anode is still available – but there are only about 50 left.

Get yours from us direct:
Get MeeBlip anode Limited Edition

For a limited time, shipping is free (US/Canada) or reduced (US$9.95 worldwide with tracking info – customs may apply).

The post What it means that the MeeBlip synth is open source hardware appeared first on Create Digital Music.

by Peter Kirn at September 18, 2015 05:42 PM

Linux Audio Announcements -

[LAA] Rivendell v2.11.0

On behalf of the entire Rivendell development team, I'm pleased to announce the availability of Rivendell v2.11.0. Rivendell is a full-featured radio automation system targeted for use in professional broadcast environments. It is available under the GNU General Public License.

From the NEWS file:
*** snip snip ***
New RDCatch Up/Download Protocols. Added support for 'sftp' and 'scp'
protocols.

MP4/AAC File Importation. Added support for importing MP4/AAC audio
files. (See the 'INSTALL' file for details regarding additional
libraries required to activate).

New Switcher Support. Added support for the Ross NK series of video
switchers via the Ross SCP/A module. See the 'SWITCHERS.txt' file for details.

PCM24 Support. Added support for using PCM24 in the core audio library.

CD Ripper. Refactored CD ripper code to provide faster and more
reliable operation.

Various other bug fixes. See the ChangeLog for details.

Database Update:
This version of Rivendell uses database schema version 245, and will
automatically upgrade any earlier versions. To see the current schema
version prior to upgrade, see RDAdmin->SystemInfo.

As always, be sure to run RDAdmin immediately after upgrading to allow
any necessary changes to the database schema to be applied.
*** snip snip ***

Further information, screenshots and download links are available at:


| Frederick F. Gleason, Jr. | Chief Developer |
| | Paravel Systems |
| Focus on the dream, not the competition. |
| -- Nemesis Racing Team motto |

Linux-audio-announce mailing list

by at September 18, 2015 12:34 PM

[LAA] ANN: pd-l2ork version 20150917 now available

Apologies for x-posting,
Following over a dozen minor releases, yesterday the pd-l2ork team
unveiled our latest major release, version 20150917.
Release highlights:
*Expanded K12 module
*Pd-L2Ork can now coexist with other releases without any package conflicts
*Drawing optimizations
*New convenience functions, like comments with endlines and labels with
*Comprehensive Raspberry Pi GPIO support, with PWM and I2S/MCP3008 (for
analog ins)
*New version of L2Ork-centric libcwiid library fork offering support for
all versions of Nintendo-branded wiimotes, including the new MotionPlus
Inside, as well as the support for interleaved passthrough mode (e.g.
MotionPlus + Nunchuk)
*Code refactoring
*Bunch of minor and aesthetic fixes
*Last release (barring any major bugs) prior to the next major release
featuring node-webkit GUI (node-webkit version is currently in alpha
stage of development)
For a changelog and a more detailed overview, please visit:
To download pd-l2ork:
NB: Currently only Ubuntu 14.04 64bit build is available, with 32bit and
Raspberry Pi builds forthcoming.
About Pd-L2Ork
Pd-L2Ork is a fork of the ubiquitous Pure-Data focusing on an improved
user interface, an expanded collection of externals, and an advanced
SVG-enabled graphical front-end. Originally it was introduced as the core
infrastructure for the Linux Laptop Orchestra (L2Ork), and it has since
expanded to include a K-12 learning module with a unique learning
environment offering adaptable granularity that has been utilized in over
a dozen maker workshops and initiatives, including the Raspberry Pi
Orchestra program for middle school children introduced in the summer of
2014. Today, pd-l2ork is being developed by a growing number of
international collaborators.
For additional info on L2Ork and pd-l2ork:
More about the founding author:
Ivica Ico Bukvic, D.M.A.
Associate Professor
Computer Music
ICAT Senior Fellow
Director -- DISIS, L2Ork
Virginia Tech
School of Performing Arts – 0141
Blacksburg, VA 24061
(540) 231-6139
Linux-audio-announce mailing list

by at September 18, 2015 05:39 AM

September 17, 2015

Hackaday » digital audio hacks

The Ubiquitous Atari Punk Console

The Atari Punk Console (APC) is a dual 555 (or single 556) based synth. Designed by the famous (and somewhat infamous) Forrest Mims in 1980, and originally simply named "Sound Synthesizer", the circuit gained its more recent popularity when re-dubbed the "Atari Punk Console" by Kaustic Machines. The circuit doesn't bear much relation to the Atari 2600, however, which didn't contain a 555 timer chip – though we assume the 2600 produced a similarly glitchy square wave audio output.

The circuit's operation is easy to grasp, and it uses only 9 components. This ease of design and construction has allowed builders to focus more on the aesthetics of construction, hacking it into interesting, and often unlikely, enclosures and systems. One such hack is the "Atari Punk Bucket" (shown here), where the APC along with a simple amp was hacked into an old rusted bucket. The APC was built up on strip-board with reclaimed speakers. [Farmer glitch] has used this as a prop in live sets and it both looks and sounds awesome.

Electro-music forum user [dnny] air-wired the circuit and replaced the pots with light dependent resistors, before cramming the whole thing into a light-bulb.


It’s also been hacked into a Game Boy, turned into a “contact theremin” (where the variable resistors are replaced with a human), embedded in guitar pedals, and been jammed into the disturbing Atari Punk Creepy Doll.

As the basic circuit is so simple, it also makes a great building block for sequencer projects, many of which are based around the "baby 10" sequencer design. These builds add a new dimension to the APC, and sound pretty great.

Somehow, 35 years after its original publication, the APC's weird retro sound continues to inspire. Why not try building an APC yourself and find your own weird enclosure to cram it into?

The circuit

The APC is a relatively simple application of the 555 and is a great beginner's build. 555s are also ubiquitous and inexpensive parts (less than 10 cents in low quantities). The complete circuit is shown below:


The circuit operates as an astable oscillator driving a monostable, with both circuits implemented with 555s. The astable circuit is closely related to the one shown in the TI datasheet:


When the 555 starts up, the discharge pin (DISCH) is effectively unconnected. This allows the capacitor (C) to charge up through RA and RB. The 555 senses the voltage on the capacitor using the THRES pin. When the voltage reaches 2/3 of the supply voltage, the 555 allows the capacitor to discharge via the discharge pin through RB. At the same time the 555 sets its output pin low. When the capacitor has drained such that the voltage falls below 1/3 of the supply voltage, the TRIG pin senses this, sets the output high and "disconnects" the discharge pin. This allows the capacitor to charge up again, and the process restarts, creating an oscillation. In the APC, RA is a variable resistor to allow the charge rate, and therefore the frequency, to be controlled.
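
If you want to predict the pitch, the standard 555 datasheet equations apply. Here is a quick sketch of the astable frequency calculation – the component values are hypothetical, not taken from the APC schematic:

#include <cstdio>

// 555 astable (datasheet equations): the cap charges through RA + RB and
// discharges through RB, giving f = 1.44 / ((RA + 2*RB) * C).
static double astable_freq(double ra, double rb, double c)
{
    return 1.44 / ((ra + 2.0 * rb) * c);
}

int main()
{
    // hypothetical values: 470k pot, 1k fixed resistor, 10 nF timing cap
    printf("pot near max: %.0f Hz\n", astable_freq(470e3, 1e3, 10e-9));
    printf("pot near min: %.0f Hz\n", astable_freq(10e3, 1e3, 10e-9));
    return 0;
}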

The astable 555 outputs a fixed pulse length; in my build this was about 10 microseconds. While this alone, if connected to a speaker, will produce an audible tone, the APC adds a second monostable circuit. Monostable circuits trigger on an input pulse, holding the output for a period of time regardless of the input pulse length. This allows the APC's pulse width to be varied. The longer pulses seem to help drive the speaker, but also have a moderating effect on the astable's output frequency, as multiple pulses can get masked. The 555 monostable circuit's operation is almost identical to the astable oscillator's, except that the trigger input is not connected to the timing capacitor. This means the circuit does not "self trigger", requiring an input (in this case from the astable circuit) instead. A second variable resistor allows the pulse width to be varied.
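
The monostable side follows the datasheet equation t = 1.1 * R * C, so the second pot scales the pulse width linearly. A tiny sketch, again with hypothetical component values:

// 555 monostable: the output stays high for t = 1.1 * R * C per trigger,
// e.g. a hypothetical 500k pot with a 10 nF cap gives roughly 5.5 ms.
static double monostable_pulse_s(double r, double c)
{
    return 1.1 * r * c;
}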

While there are many kits available, the circuit is easily breadboarded using a single 556 or two 555s. If you build an interesting APC let us know!

Filed under: classic hacks, digital audio hacks, Featured

by Nava Whiteford at September 17, 2015 02:01 PM

Nothing Special

Making an LV2 plugin GUI (yes, in Inkscape)

Told you I'd be back.

So I made this pretty UI last post but I never really told you how to actually use it (I'm assuming you've read the previous 2 posts, this is the 3rd in the series). Since I'm "only a plugin developer," that's what I'm going to apply it to. Now I've been making audio plugins for longer than I can hold my breath, but I've never bothered to make one with a GUI. GUI coding seems so boring compared to DSP and it's so subjective (user: "that GUI is so unintuitive/natural/cluttered/inefficient/pretty/ugly/slow etc. etc....") and I actually like the idea of using your ears rather than a silly visual curve, but I can't deny, a pretty GUI does increase usership. Look at the Calf plugins...

Anyhow, regardless of whether it's right or wrong, I'm going to make GUIs (which are completely optional – you can always use the host-generated UI). I think with the infamous cellular automaton synth I will actually be able to make it easier to use, so the GUI is justifiable, but other than that they're all eye candy, so why not make 'em sweet? So I'll draw them first, then worry about making them an actual UI. I've been trying to do this drawing-first strategy for years, but once I started toying with svg2cairo I thought I might actually be able to do it this time. Actually, as I'm writing this paragraph the ball is still up in the air, so it might not pan out, but I'm pretty confident that by the time you read the last paragraph of this almost-tutorial I'll have a plugin with a GUI.

(*EDIT 14 Sept 2015 - a big mistake was pointed out to me in my LV2_UI instantiation, updated below).

So let's rip into it:

One challenge I have is that I really don't like coding C++ much. I'm pretty much a C purist. So why didn't I use gtk? Well, 'cause it didn't have AVTK. Or ntk-fluid. With that fill-in-the-blank development style fluid lends itself to, I barely even notice that it's C++ going on in back. It's a pretty quick process too. I had learned a fair bit of Qt, but was forgoing that anyway, and with these new (to me) tools I had a head start and got to where I am relatively quickly (considering my qsvg widgets are now 3 years old and unfinished).

The other good news is that the DSP and UI are separate binaries and can have completely separate codebases, so I can still do the DSP in my preferred language. This forced separation is very good practice for realtime signal processing. DSP should be the top priority and should never ever ever have to wait for the GUI for anything.

But anyway, to make an LV2 plugin GUI we'll need to add some extra .ttl stuff. So in manifest.ttl (with placeholder URIs and binary names):
@prefix lv2:  <http://lv2plug.in/ns/lv2core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ui:   <http://lv2plug.in/ns/extensions/ui#> .

<plugin_uri>                  # placeholder: substitute the plugin's actual URI
        a lv2:Plugin, lv2:DelayPlugin ;
        lv2:binary <stuck.so> ;
        rdfs:seeAlso <stuck.ttl> .

<plugin_ui_uri>               # placeholder: substitute the UI's actual URI
        a ui:X11UI ;
        ui:binary <stuck_ui.so> ;
        lv2:extensionData ui:idle .

That's not a big departure from the no-UI version, but we'd better make a UI binary to back it up. We've got a .cxx and .h from ntk-fluid that we made in the previous 2 posts, but that's not going to be enough. The callbacks need to do something. But what? Well, they will be passing values into the control ports of the plugin DSP somehow. OpenAV Productions' Harry van Haaren wrote a little tutorial on it. The thing is called a write function. Each port has an index assigned by the .ttl, and the DSP source usually has an enum to keep these numbers labeled. So include (or copy) this enum in the UI code, then declare an LV2UI_Write_Function and also an LV2UI_Controller that will get passed in as an argument to the function. Both of these will get initialized with arguments that get passed in from the host when the UI instantiate function is called. The idea is that the LV2UI_Write_Function is a function pointer that will call something from the host that stuffs data into the port. You don't need to worry about how that function works – just take comfort knowing that wherever that points, it'll take care of you. In a thread-safe way, even.
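
For reference, here is the write function's type as defined in the LV2 UI extension header (ui.h), so you can see exactly what you'll be calling:

typedef void (*LV2UI_Write_Function)(LV2UI_Controller controller,
                                     uint32_t         port_index,
                                     uint32_t         buffer_size,
                                     uint32_t         port_protocol,
                                     const void*      buffer);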

Another detail (that I forgot when I first posted this yesterday) is declaring that this plugin will use the UI you defined in the manifest.ttl. What that means is that in stuck.ttl you add the ui prefix and declare the UI's URI as the UI for this plugin (again, placeholders below):
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

@prefix lv2: <http://lv2plug.in/ns/lv2core#> .
@prefix ui:  <http://lv2plug.in/ns/extensions/ui#> .

<plugin_uri>
        a lv2:Plugin, lv2:DelayPlugin ;
        doap:name "the infamous stuck" ;
        doap:maintainer [
                foaf:name "Spencer Jackson" ;
                foaf:homepage <> ;
                foaf:mbox <> ;
        ] ;
        lv2:requiredFeature <> ;
        lv2:optionalFeature lv2:hardRTCapable ;
        ui:ui <plugin_ui_uri> ;

        lv2:port [

So enough talk. Let's code.
For LV2 stuff we need an additional header. So in an extra code box (I used the window's):
#include "lv2/lv2plug.in/ns/extensions/ui/ui.h"   // the LV2 UI extension header

It will be convenient to share a single source file for which port is which index. That eliminates room for error if anything changes. So in an additional code box (the Aspect Group's, since the window's are all full) include that header – something like this, assuming the port enum lives in a shared stuck.h header:
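
#include "stuck.h"   // hypothetical shared header defining enum { STICKIT, DRONEGAIN, RELEASE, ... }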
We also will need 2 additional members in our StuckUI class. Do this by adding 2 "declarations" in fluid. The code is:
LV2UI_Write_Function write_function;

LV2UI_Controller controller;

And finally, in each callback, add something along the lines of the following sketch (e.g. for the Stick It! port – the widget name stickit is my placeholder):
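
// push the Stick It! button's new value to DSP port STICKIT
// (buffer size = sizeof(float), protocol 0 = plain float)
write_function(controller, STICKIT, sizeof(float), 0, &stickit->floatvalue);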

This is calling the write function with the controller object, port number, "buffer" size (usually the size of a float), protocol (usually 0, for float), and a pointer to a "buffer" as arguments. So now when the button is clicked it will pass the new value on to the DSP in a thread-safe way. The official documentation of write functions is here. The floatvalue member of dials and buttons is part of ffffltk (which was introduced in the other parts of this series) and was added exclusively for LV2 plugins. 'Cause they always work in floats. Or in atoms, which is a whole other ball of wax. Really though, it's really easy to do this as long as you keep it to simple float data like a drone gain.

Another important thing you must add to the fluid design is a function called void idle(). In this function add a code block that has 2 lines – presumably the standard non-blocking FLTK event pump:
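
Fl::check();   // process pending FLTK events without blocking
Fl::flush();   // redraw any damaged widgets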

To help clarify everything, here's a screenshot of ntk-fluid once I've done all this. It's actually a pretty good overview of what we've done so far:

Possibly the biggest departure from what we've done previously is that now the program will not be a stand-alone binary, but a library that has functions that get called by the host (just like in the DSP). This means some major changes in our stuck_ui_main.cxx code.

For the GUI the most important functions are the instantiation, cleanup, and port event. To use NTK/fltk/ffffltk you will need to use some LV2 extensions requiring another function called extension_data, but we'll discuss that later. The instantiation is obviously where you create your window or widget and pass it back to the host, cleanup deallocates it, and the port event lets you update the GUI if the host changes a port (typically with automation). We'll present them here in reverse order, since the instantiation with NTK ends up being the most complex. So, port event is fairly straightforward:
void stuckUI_port_event(LV2UI_Handle ui, uint32_t port_index, uint32_t buffer_size, uint32_t format, const void * buffer)
{
    StuckUI *self = (StuckUI*)ui;
    if (format == 0) // 0 = plain float control value
    {
        float val = *(float*)buffer;
        switch (port_index)
        {
        case STICKIT:
            // update the Stick It! button to show val
            break;
        case DRONEGAIN:
            // update the drone gain dial to show val
            break;
        case RELEASE:
            // update the release dial to show val
            break;
        }
    }
}

The enlightening thing about doing a UI is that you get to see both sides of what the LV2 functions do. So just like in the widget callbacks you send a value through the write_function, this is like what the write function does on the other side, first you recast the handle as your UI object so you can access what you need, then make sure its passing the format you expect (0 for float, remember?). Then assign the data corresponding to the index to whatever the value is. This keeps your UI in sync if the host changes a value. Nice and easy.

Next up is the simplest: Cleanup:
void cleanup_stuckUI(LV2UI_Handle ui)
{
    StuckUI *self = (StuckUI*)ui;
    delete self;
}

No explanation necessary. So that leaves us with instantiation. This one is complex enough I'll give it to you piece by piece. So first off is the setup, checking that we have the right plugin (this is useful when you have a whole bundle of plugins sharing code), then dynamically allocating a UI object that will get returned as the plugin handle that all the other functions use, and declaring a few variables we'll need temporarily:
static LV2UI_Handle init_stuckUI(const struct _LV2UI_Descriptor * descriptor,
        const char * plugin_uri,
        const char * bundle_path,
        LV2UI_Write_Function write_function,
        LV2UI_Controller controller,
        LV2UI_Widget * widget,
        const LV2_Feature * const * features)
{
    if(strcmp(plugin_uri, STUCK_URI) != 0)
        return 0;

    StuckUI* self = new StuckUI();
    if(!self) return 0;
    LV2UI_Resize* resize = NULL;

Then we save the write_function and controller that got passed in from the host so that our widgets can use them in their callbacks:
    self->controller = controller;
    self->write_function = write_function;

Next stop: checking features the host has. This is where using NTK makes it a bit more complicated. The host should pass in a handle for a parent window and we will be "embedding" our window into the parent. Another feature we will be hoping the host has is a resize feature that lets us tell the host what size the window for our plugin should be. So we cycle through the features and when one of them matches what we're looking for we temporarily store the data associated with that feature as necessary:
    void* parentXwindow = 0;
    for (int i = 0; features[i]; ++i)
    {
        if (!strcmp(features[i]->URI, LV2_UI__parent))
            parentXwindow = features[i]->data;
        else if (!strcmp(features[i]->URI, LV2_UI__resize))
            resize = (LV2UI_Resize*)features[i]->data;
    }

Now we go ahead and start up our UI window, call the resize function with our UI's width and height as arguments, and call a special NTK function called fl_embed() to set our window into the parent window. It seems this function was created specially for NTK. I haven't found it in the fltk source or documentation, so I really don't know much about it or how you'd do it using fltk instead of NTK. But it works. (You can see the NTK source and just copy that function.) EDIT: one important detail that I missed is that you are supposed to fill in the LV2UI_Widget that the host passes with your UI widget. When your UI is X11 based you pass in the xid from the X window, or at least set it to zero. This is done below after fl_embed(). Once that's done we return our instance of the plugin UI object:
    self->ui = self->show();
    // tell the host how big our UI's window should be
    if (resize)
       resize->ui_resize(resize->handle, self->ui->w(), self->ui->h());
    fl_embed(self->ui, (Window)parentXwindow);

    *widget = (LV2UI_Widget)fl_xid(self->ui);

    return (LV2UI_Handle)self;
}

Ok. Any survivors? No? Well, I'll just keep talking to myself then. We mentioned the extension_data function. This function gets called and can do various special things if the host supports them. Similar to the port event, the same extension_data function gets called with different extension URIs, and we can return a pointer to a function that does what we want when an extension we care about gets called. Once again we get to see both sides of a function we called. The resize stuff we did in instantiate can be used as a host feature like we did before, or as extension data. As extension data, you can resize your UI object according to whatever size the host requests. This extension isn't necessary for an NTK GUI, but since the parent window we embedded our UI into is a basic X window, it's not going to know to call our fltk resize functions when it's resized.

In contrast, a crucial extension for an NTK GUI is the idle function, because the X window similarly doesn't know anything about fltk and will never ask it to redraw when something changes. So this LV2 extension exists for the host to call a function that will check if something needs to get updated and redrawn on the screen. We made an idle function already to call in our StuckUI object through fluid, but we need to set up the stuff to call it. Our extension_data function will need some local functions to call:
static int
idle(LV2UI_Handle handle)
{
  StuckUI* self = (StuckUI*)handle;
  self->idle(); // the idle() we added in fluid: pump events and redraw
  return 0;
}

static int
resize_func(LV2UI_Feature_Handle handle, int w, int h)
{
  StuckUI* self = (StuckUI*)handle;
  //self->ui->size(w, h); // commented out: see the discussion below
  return 0;
}

Hopefully it's obvious what they are doing. The LV2 spec has some structs that are designed to interface between these functions and the extension_data function, so we declare those structs as static constants, outside of any function, with pointers to the local functions:
static const LV2UI_Idle_Interface idle_iface = { idle };
static const LV2UI_Resize resize_ui = { 0, resize_func };

And now we are finally ready to see the extension_data function:
static const void*
extension_data(const char* uri)
{
  if (!strcmp(uri, LV2_UI__idleInterface))
    return &idle_iface;
  if (!strcmp(uri, LV2_UI__resize))
    return &resize_ui;
  return NULL;
}

You see, we just check the URI to know if the host is calling the extension_data function for an extension that we care about. If it is, we pass back the struct corresponding to that extension. The host will know how these structs are formed and use them to call the functions to redraw or resize our GUI when it thinks it's necessary. We aren't really guaranteed any timing for these, but most hosts are gracious enough to call it at a frequency that gives pretty smooth operation. Thanks, hosts!

So, it's now time for the ugly truth to rear its head. Full disclosure: this implementation of the resizing extension code doesn't work at all. The official documentation describes this feature as being two-way, host to plugin or plugin to host. We've already used it as plugin to host, and that works perfectly, but when trying to go the other way I can't get it to work. The trouble is when we declare and initialize the LV2UI_Resize object. The first member of the struct is of type LV2UI_Feature_Handle, which is really just a void* that should point to whatever data the plugin will want to use when the function in the 2nd member of the struct gets called. Well, for us, when resize_func gets called we want our instance of the StuckUI that we created in init_stuckUI(). That would allow us to call the resize function. But we can't, because it's out of scope, and the struct must be a constant, so it can't be assigned in the instantiate function. So I just have a 0 as that first argument and actually have the call to size() commented out.

Perhaps there's a way to do it, but I can't figure it out. I included that information because I hope to figure out how and someday make my UI completely resizable. The best way to find out, I figure, is to post fallacious information on the Internet and pretty soon those commenters will come tell me how wrong and stupid I am. Then I can fix it.

As a workaround you can put in your manifest.ttl this line:
lv2:optionalFeature ui:noUserResize ;

This will at least keep the UI from stupidly sitting there the same size even when the window is resized – if the host supports it.

EDIT: I understand that returning the correct LV2UI_Widget from instantiate should allow the plugin to resize without using the resize extension. It also allows for keyboard entry or modifiers.  Then the workaround is unnecessary.

"So if its not even resizable why in the world did you drag us through 3 long detailed posts on how to make LV2  GUIs out of SCALABLE vector graphics?!" you ask. Well, you can still make perfectly scalable guis for standalone programs, and just having a WYSIWYG method of customized UI design is hopefully worth something to you. It is to me, though I really hope to make it resizable soon. It will be nice to be able to enlarge a UI and see all the pretty details, then as you get familiar with it shrink it down so you can just use the controls without needing to read the text. Its all about screen real estate. And tiling window managers for me.

So, importantly, in LV2 we need to have a standard function that passes all these functions to the host so the host can call them as necessary. Similar to the DSP side, you declare a descriptor, which is really a standard struct that has the URI and function pointers to everything:
static const LV2UI_Descriptor stuckUI_descriptor = {
    STUCKUI_URI,          // the UI's URI string (macro name assumed)
    init_stuckUI,
    cleanup_stuckUI,
    stuckUI_port_event,
    extension_data
};
And lastly the function that passes it back. Its form seems silly for a single plugin, but once again you can have a plugin bundle (or a bundle of UIs) sharing source that passes the correct descriptor for whichever plugin is requested (by index). It looks like this:
const LV2UI_Descriptor* lv2ui_descriptor(uint32_t index)
{
    switch (index) {
    case 0:
        return &stuckUI_descriptor;
    default:
        return NULL;
    }
}

As a quick recap, here are the steps to go from Inkscape to Carla (or your favorite LV2 plugin host):
1. Draw a Gui in Inkscape
2. Save the widgets as separate svg files
3. Convert to cairo code header files
4. Edit the draw functions to animate dials, buttons, etc. as necessary.
5. Create the GUI in ntk-fluid with the widgets placed according to your Inkscape drawing
6. Include the ffffltk.h and use ffffltk:: widgets
7. Assign them their respective draw_functions() and callbacks
8. Add the write_function, controller members, and the idle() function
9. Export the source files from fluid and write a ui_main.cxx
10. Update your ttl
11. Compile, install, and load in your favorite host.

Our plugin in Jalv.gtk

So you now have the know-how to create your own LV2 plugin GUIs using Inkscape, svg2cairo, ffffltk, ntk-fluid, and your favorite editor – in 11 "easy" steps. You can see the source for the infamous Stuck, through which I developed this workflow, in my infamous repository. And soon all the plugins will be ffffltk examples. I'll probably refine the process, and maybe I'll post about it. Feel free to ask questions – I'll answer to the best of my ability. Enjoy and good luck.

As an aside: in order to do this project I ended up switching build systems. Qmake worked well, but I mostly just copied the script from Rui's synthv1 source and edited it for each plugin. Once I started needing to customize it more to generate separate DSP and UI binaries, I had a hard time. I somewhat arbitrarily decided to go with cmake. The fact that drmr had a great cmake file to start from was a big plus. And the example waf file I saw freaked me out, so I didn't use waf. I guess I don't know Python as much as I thought. Cmake seemed more like a functional programming language, even if it is a new syntax. I was surprised that in more or less a day I was able to get cmake doing exactly what I wanted. I had to fight with it to get it to install where I wanted (read: obstinate learner), but now it's ready for whatever plugins I can throw at it. So that's what I'm going to use going forward. I'll probably leave the .pro files for qmake, so you can build without a GUI if you want. But maybe I won't. Complain loudly in the comments if you have an opinion.

by Spencer ( at September 17, 2015 09:46 AM

September 15, 2015

Pid Eins

Preliminary systemd.conf 2015 Schedule

A Preliminary systemd.conf 2015 Schedule is Now Online!

We are happy to announce that an initial, preliminary version of the systemd.conf 2015 schedule is now online! (Please ignore that some rows in the schedule link the same session twice on that page. That's a bug in the website CMS that we are working to fix.)

We got an overwhelming number of high-quality submissions during the CfP! Because there were so many good talks we really wanted to accept, we decided to do two full days of talks now, leaving one more day for the hackfest and BoFs. We also shortened many of the slots, to make room for more. All in all we now have a schedule packed with fantastic presentations!

The areas covered range from containers, to system provisioning, stateless systems, distributed init systems, the kdbus IPC, control groups, systemd on the desktop, systemd in embedded devices, configuration management and systemd, and systemd in downstream distributions.

We'd like to thank everybody who submitted a presentation proposal!

Also, don't forget to register for the conference! Only a limited number of registrations are available due to space constraints! Register here!

We are still looking for sponsors. If you'd like to join the ranks of systemd.conf 2015 sponsors, please have a look at our Becoming a Sponsor page!

For further details about systemd.conf consult the conference website.

by Lennart Poettering at September 15, 2015 10:00 PM

September 13, 2015

Libre Music Production - Articles, Tutorials and News

August/September 2015 – Interviews and Linux audio news

Our newsletter for August/September has now been sent to our subscribers (471 people!). If you have not yet subscribed, you can do that from our start page.

You can also read the latest issue online. In it you will find:

  • New 'LMP Asks' interviews
  • Linux Audio news
  • Guitarix demos and presets
  • New preset and patch website

by admin at September 13, 2015 09:27 PM

Building a synth module using a Raspberry Pi

Ever since I did an acid set with my brother-in-law at the now closed bar De Vinger, I've been playing with the idea of creating some kind of synth module out of a Raspberry Pi. The Raspberry Pi 2 should be powerful enough to run a complex synth like ZynAddSubFX. When version 2.5.1 of that synth got released, the idea resurfaced again, since that version allows you to remotely control a headless instance of ZynAddSubFX via OSC – running on, for instance, a Raspberry Pi. I had looked at this functionality a few months ago, but the developer was just starting to implement it, so it wasn't very usable yet.

But with the release of ZynAddSubFX 2.5.1 the stability of the zynaddsubfx-ext-gui utility has improved to such an extent that it's a very usable tool. In the above screenshot you can see zynaddsubfx-ext-gui running on my notebook with Ubuntu 14.04, controlling a remote instance of ZynAddSubFX running on a Raspberry Pi.

So basically all the necessary building blocks for a synth module are there. With my battered Akai MPK Mini and a cheap PCM2704 USB DAC, I started putting together a test setup.

For the OS on the Raspberry Pi 2 I chose Debian Jessie, as I feel Raspbian isn't getting you the most out of your Pi. It's running a 4.1.6 kernel with the 4.1.5-rt5 RT patch set, which applied cleanly and seems to run fine so far:

pi@rpi-jessie:~$ uname -a
Linux rpi-jessie 4.1.6-rt0-v7 #1 SMP PREEMPT RT Sun Sep 13 21:01:19 CEST 2015 armv7l GNU/Linux

This isn’t a very clean solution of course so let’s hope a real 4.1.6 RT patch set will happen or maybe I could give the 4.1.6 PREEMPT kernel that rpi-update installed a try. I packaged a headless ZynAddSubFX for the RPi on my notebook using pbuilder with a Jessie armhf root and installed the package for Ubuntu 14.04 from the KXStudio repos. I slightly overclocked the RPi to 1000MHz and set the CPU scaling governor to performance. The filesystem is Ext4, mounted with noatime,nobarrier,data=writeback.

To get the USB audio interface and the USB MIDI keyboard into line I had to add the following line to my /etc/modprobe.d/alsa.conf file:

options snd-usb-audio index=0,1 vid=0x08bb,0x09e8 pid=0x2704,0x007c

This makes sure the DAC gets loaded as the first audio interface, i.e. with index 0. Before adding this line the Akai would claim index 0, and since I'm using ZynAddSubFX with ALSA it couldn't find an audio interface. But all is fine now:

pi@rpi-jessie:~$ cat /proc/asound/cards
 0 [DAC            ]: USB-Audio - USB Audio DAC
                      Burr-Brown from TI USB Audio DAC at usb-bcm2708_usb-1.3, full speed
 1 [mini           ]: USB-Audio - MPK mini
                      AKAI PROFESSIONAL,LP MPK mini at usb-bcm2708_usb-1.5, full speed

So, no JACK as the audio back-end – the output is going directly to ALSA. I've decided to do it this way because I will only be running one single application that uses the audio interface, so basically I don't need JACK. And JACK tends to add a bit of overhead: you barely notice this on a PC system, but on small systems like the Raspberry Pi, JACK can consume a noticeable amount of resources. To make ZynAddSubFX use ALSA as the back-end I'm starting it with the -O alsa option:

zynaddsubfx -r 48000 -b 256 -I alsa -O alsa -P 7777

The -r option sets the sample rate, the -b option sets the buffer size, -I is for the MIDI input, and the -P option sets the UDP port on which ZynAddSubFX starts listening for OSC messages. And now for the cool part: if you then start zynaddsubfx-ext-gui on another machine on the network and tell it to connect to this port, it starts only the GUI and sends all changes made in the GUI as OSC messages to the headless instance it is connected to:

zynaddsubfx-ext-gui osc.udp://

Next up is stabilizing this setup and testing with other kernels or kernel configs, as the kernel I've cooked up now isn't a viable long-term solution. And I'd like to add a physical MIDI in, and maybe a display like the one described on the Samplerbox site. And the project needs a casing, of course.

The post Building a synth module using a Raspberry Pi appeared first on

by jeremy at September 13, 2015 09:26 PM

September 12, 2015

Media units

The MK808 Android TV stick with a PCM2704 USB audio interface runs Debian Jessie with MPD and serves as our media player for audio files. It draws its power from the USB port of our cable modem, so it's always on. Most of the time Indie Pop Rocks is playing. It's hooked up to the network via WiFi. We use MPDroid to control it.

The Raspberry Pi runs OpenELEC with Kodi. We use this for watching all kinds of video files that we stream from our NAS (an aging WD My Book Live that runs Debian Lenny) via an NFS share. It is connected to the network via ethernet.

The Chromecast is for watching Netflix. When we just got it we had some issues with connecting it to the network but after replacing our old router with an ASUS RT-AC68U it worked flawlessly.

The Technics SL-1210MK2 with Ortofon headshell and cartridge is for listening to music on vinyl – you know, those round black plastic units from the past with grooves in them. It doesn't have any network connections and doesn't run an OS. It does send electrical current to a NAD C 325BEE Stereo Integrated Amplifier with Dali Concept 2 speakers. Yeah, I'm a 2.0 guy.

The TV is an old pre-Smart TV Samsung, but as it still works we probably won't replace it for the time being. It does have CEC, so we can control the TV, RPi and Chromecast with a single remote.

The post Media units appeared first on

by jeremy at September 12, 2015 11:16 AM

September 11, 2015

Libre Music Production - Articles, Tutorials and News

Infamous Plugins v0.1 released

Spencer Jackson has just released v0.1 of the Infamous Plugins suite. While there are already many bread and butter plugins available for Linux, Infamous plugins aim to "fill some holes, supplying non-existing plugins for linux audio".

While Infamous Plugins were previously available, this is the first release with custom GUIs. The suite includes the following plugins -

by Conor at September 11, 2015 09:46 AM

September 10, 2015

Scores of Beauty

Agile Music Edition?

Throughout the lifetime of this blog we have been propagating a certain perspective on music engraving and music editing. Its idea is to take advantage of methods, tools and workflows from software development and computer science. Agile software development is one of today’s strong tendencies in software development, sharing important parts of our endeavors and ideas. So why not give it a shot and draw from it, working towards agile music edition?

It has been months since the last post, which is unprecedented: over two years we have published one post a week on average. In a way this is the reason for the long gap – as what you are reading right now is actually the 100th post on Scores of Beauty :-) . The post I had written for this occasion has been pending for a long time but was blocked on something that has yet to be announced, so we've ended up not publishing anything lately. The situation got more and more pressing, also because we have suggestions from guest authors again, and so finally this post formed in my mind.

I had taken The Small Agile Book (seems to be German only) with me for holidays. Originally I intended to get some background about a contemporary development technique buzz-phrase (to compensate for my lack of formal CS education). But when reading the (recommendable) volume it struck me that agile has a lot in common with my ideas of “score development”. The most obvious affinity is the idea of a self-organizing team with shared responsibilities and getting away from the Waterfall development model.

The “Waterfall” Development Model

Image from Wikipedia

In traditional software development processes the lifecycle of a project is very strictly organized in consecutive stages as shown in the above visualization. There are discrete responsibilities for these stages, and larger companies even have dedicated departments for each of them. Music edition often is organized in a similar way, with publishing houses having established strict workflows that strikingly resemble that waterfall. There is a sequence of consecutive steps that could be visualized in a similar manner, for example: critical review (main editor) ⇒ music entry ⇒ review (house’s editor) ⇒ proof-reading ⇒ engraving’s beautification ⇒ prepress.
Usually these steps are assigned to separate people with strict separation of responsibilities. But even more important is that – like in the Waterfall model – the steps have to be processed sequentially, that is, any step has to be finished completely before the next stage can be entered.

In software development this Waterfall approach has proven to create significant problems, and I have more than once discussed the specific issues that arise (in music editing) from having to pass opaque binary files around to process them sequentially. Agile development has come up with numerous offerings to overcome them, and our ideas for collaborative music edition workflows (as can for example be seen in our crowd editing project or the posts tagged with version control) seem to be heading in a quite similar direction.

I think this explains to some extent the reservations traditional publishers (and editors) have with our approach. They presumably experience the same sense of missing security and “predictability” that decision makers in software companies have or had with regard to agile ideas. Realizing this might be a first step to also develop new argumentation strategies for us.

The Multi Disciplinary and Self-organizing Team

One central concept of agile is the multi-disciplinary and self-organizing team, as opposed to specialized teams (e.g. analysts, architects, coders, testers etc.). In agile teams roles can be assigned with much more flexibility, enabling direct communication and short working cycles. This is quite intuitively applicable to LilyPond based edition projects where in principle everybody can do anything. There are specializations – for example some people will have responsibility for scholarly issues while others have a sharper eye regarding engraving decisions. But still they can seamlessly collaborate on the same "thing" (i.e. the code repository).

Adaptive Planning

Another central concept of agile is the notion of adaptive planning. While planning is of course an important factor it doesn’t have to be done completely up-front. Instead a project is organized in short iterations that are each finished off with a retrospective. This essential step is used to plan the next iteration but also to evaluate and improve the global plan and details of the workflow.

In a typical collaborative engraving project this happens too, basically all the time. But I think the problem is that it happens in a more or less random fashion. This probably spoils a significant portion of the potential that consciously applied agile methods provide.

Continuous Delivery

An agile project should try to deliver results as early as possible and only then refine and extend the product continuously. This is done by constant prioritisation so that basically the most important parts are developed first (and it’s constantly reconsidered what the most important parts are). The idea is that this will be more satisfying for the “customer” who gets tangible results at early stages, and also for the team for the same reasons. From a certain moment onwards the project could be stopped and deployed at any time – for example when budget or deadline have been reached – but still deliver a usable product.

This idea can be applied to music edition quite well, as the only prerequisite for delivering a score is the plain music entry. Everything else – critical review, fixing page layout and engraving decisions (at least when using LilyPond with its exceptional default engraving quality) – can be considered as “only” refining an already usable product. Of course you wouldn’t do without any of it in a printed critical edition but there are many use cases (especially performance materials) where it may make a huge difference. An agile music edition project would place the music entry with highest priority in the project backlog and everything else later. If other work items turn out to be critical at any point (e.g. writing new functions to handle lyrics or part combination or whatever) they can easily be inserted in the backlog (see “adaptive planning” above).

Kanban: Optimizing Processes

One thing I realized would have really helped us when producing “Das trunkne Lied”: optimizing workflows with Kanban (or any comparable method). Kanban is a method to streamline the use of resources in production chains, originally developed for car manufacturers. Basically it does so by (constantly) analyzing the process to identify and remove bottleneck situations, typically by comparing the number of work items that pass each stage within any given project iteration. It is a fascinating concept of deferring the control of operations to those who actually do the work, relieving the project as a whole from a lot of controlling overhead.

Having a team with shared responsibilities is already a good foundation as it allows any team member to do “what has to be done” at any given moment. Nevertheless, we often ran into the situation that there was too little or too much work available at the moment. Time will be used most efficiently when the “work items” flow as continuously as possible through their stages (e.g. music entry, proof-reading, revision). Tasks such as preparation of part templates, handling combined parts or general programming tasks are also part of that equation. While organizing this stream of work will work out more or less smoothly by itself it will definitely benefit from having proven concepts such as Kanban applied.

Where Are We Today?

Touching briefly on the subject indicates that there are significant correlations between agile software development and the potential of music edition workflows as described on Scores of Beauty. However, it becomes clear that there is much more to be gained by a conscious application of agile principles. Obviously some ideas have only been touched upon or rather "happen" randomly; others haven't even been thought of, e.g. the importance of direct and regular (structured) personal communication or the constant measuring of progress. For example, visualizing a project with a burndown chart (or better, its counterpart, the burnup chart) is something that strikes me as a worthwhile idea. Implementing the idea of story points as a measurement unit for work achieved or to be done will also be very helpful.

In other cases the agile concepts can’t so easily be applied to music edition. I have no clear idea how pair programming, unit testing or user stories could be mapped to our workflows. Of course it’s no good to follow agile by the letter, but I have the suspicion it would be fruitful to at least think about these aspects.

This post is definitely a sketch, I neither give a report of success nor elaborate on concrete plans. However, I would be really happy if someone or we as a community would explore this field more thoroughly. It’s one of the characteristics of working with LilyPond (and its related tools) that you can approach projects like a software developer. So why not take the idea of integrating software development techniques into music engraving a step further and create agile principles of music edition?

by Urs Liska at September 10, 2015 10:00 PM

September 09, 2015

Linux Audio Announcements -

[LAA] ISMIR 2015 Reminder for Late-breaking Demo Contributions

[Please disseminate. Apologies for cross-posting]

ISMIR 2015 Reminder for Late-breaking Demo Contributions

We would like to remind you of the possibility to submit extended
abstracts to the ISMIR 2015 late-breaking demo session. This session is
dedicated to the presentation of preliminary results, ideas, applications
or system prototypes that are not yet fully formed nor systematically
evaluated, but of interest to the MIR community.

Authors are encouraged to submit extended one-page (preferred) or
two-page (maximum) abstracts to the late-breaking demo session here:

Submit your late-breaking demo contribution in PDF format according to
one of the following templates:

Submissions do not need to be anonymized. Please note that all
late-breaking demo presentations will be posters, and they may or may not
include a demo component. Submissions must clearly state if there is to be
a demonstration component, and briefly address any special technical
requirements.

To guarantee the proper allocation of the late-breaking demos in the
conference venue, the submission system will be closed by October 26,
2015. We will screen submissions for formatting compliance after they
are received; acceptance/rejection notifications will be sent to authors
as soon as the submissions are screened. Please try to upload finished
abstracts instead of placeholder documents. This will greatly facilitate
our job of deciding which contributions to accept.

Note that at least one author of an accepted late-breaking demo abstract
must be registered for the conference and present the work there.

All questions should be directly addressed to the ISMIR 2015
late-breaking demo chairs at

Dipl.-Ing. (FH) Christian Dittmar

International Audio Laboratories Erlangen
Am Wolfsmantel 33
91058 Erlangen, Germany

Phone +49 9131 85-20538
Fax +49 9131 85-20524
Mobile O2 +49 176 245 663 91
Mobile TK +49 160 949 224 57

Linux-audio-announce mailing list

by at September 09, 2015 11:35 AM

September 08, 2015

Linux Audio Announcements -

[LAA] Infamous Plugins 0.1 -- the Eye-Candy Release

After a nail-biting wait on the edge of your seats, the Eye-Candy Release
is upon us!

As promised, the donate/wait goal is now fulfilled and the GUIs for the
Infamous Plugins are all released. A lot has changed (even the install
process), so check them out. Find them at our new site:

As usual please report any bugs you find!

As a side note, the donate-wait thing worked pretty well for me, despite
resulting in a "deadlock". Since the project was done, and not enough
people were interested financially I was able to move on to working on
other projects for 11 months (osc2midi and rkr lv2 are the results). I may
or may not try it again, but really, if you have several projects that you
can rotate around, "just wait" isn't so bad. The whole goal is to bring
income to our much-needed developers, I realize that, but this does
generate a bit of buzz and a month of time was bought back, which was more
than I'd ever received before. Huge thanks to the donors!

And to all:
_Spencer (ssj71)

by at September 08, 2015 05:41 AM


full Embedded Artist concert online

The full concert of my project Embedded Artist, which I run with Wolfgang Spahn, performed at Spektrum Berlin on 28 August 2015:

"Embedded Artist" is a media performance that combines four different layers merged into one visual entity. As a contemporary Gesamtkunstwerk, 3D models, video scratching, live camera, and mechanical effects are projected and combined within the space. The sound of the performance combines mechanical industrial noise with digital synthetic sound, which create structures and patterns connected to the visual output.
In this performance Spahn and Steiner have developed their own system to control multiple embedded systems using Pure Data, Raspberry Pi, Raspian, and Python, where as hardware components several Raspberry Pi's are combined with Paper-Duino-Pi's remote controlled via OSC from the performers laptops.
The 3D models are animated in OpenGL, where video and live cameras each run on a Raspberry Pi. Mechanical and optical effects such as fragmented projections are generated by servos and glass prisms, which are projected back onto the walls. "Embedded Artist" is filming the audience as well, which are additionally re-projected back into the space.

by herrsteiner ( at September 08, 2015 02:17 AM

September 07, 2015

Create Digital Music » open-source

Here are two new ways of combining a synth with Arduino


In the last couple of weeks, we’ve gotten not just one, but two new synthesizers that piggy-back on the Arduino electronics platform. The result, then, is instruments that you can modify via Arduino code.

You’ll need an Arduino for each of these to work, so figure on adding some bucks to the purchase price. (I also recommend only using a real Arduino or Genuino; the clones I’ve found are often unreliable, and it’s better to support the developers of the platform.)

The miniATMEGATRON from Soulsby Synthesizers is especially appealing. It uses the same grungy, nicely lo-fi sound engine of the Atmegatron, but it’s in kit form. It’s a pretty easy kit to put together – I watched folks assembling them in Brno earlier this summer, and they’ll be accessible to anyone with some soldering experience (or some supervision).

Just built as-is, the miniATMEGATRON is fun, but not terribly useful – it just plays back some sequences. Where it gets interesting is if you either write your own code or, more likely, add the MIDI “hack.” This involves adding a MIDI port to the Arduino. Once you do that, this is a playable MIDI synth, complete with clock sync. And then there are some fun features – 16 PWM waveforms, an LFO with 16 waveforms of its own, modulation extras, and a digital filter with 15 algorithms. There’s also a “wavecrusher” and phaser and distortion effects. Basically, you get a lot of grungy digital fun in one package.

The code is open source, though this isn’t strictly speaking open source hardware (only the firmware is open).

If you want a ready-to-play instrument, the original Atmegatron is really your best bet, and comes in a beautiful case. It’s also still possible to modify using the friendly Arduino development environment. But the miniATMEGATRON is a steal for DIYers, and I suspect for them, the soldering and hacking will in fact be a selling point.

Soulsby miniATMEGATRON


Tasty Chips, who made the analog Sawbench before, are back with an Arduino Piggyback Synthesizer. The concept as far as Arduino goes is the same as Soulsby's: you use this board as an add-on to Arduino, and then use Arduino coding to hack your own custom functions. But the Tasty Chips route is analog, like the Sawbench. You get a fully-analog oscillator, an analog VCA, and a resonant low-pass filter.

You can also do frequency modulation with sine or saw, controlled via mod wheel or MIDI. That’s a good thing, as otherwise I find a single oscillator setup can get a bit bland – analog or not.

What Tasty Chips have done – and frankly I wish Soulsby had, too – is add MIDI right on the board. In fact, you get both in and thru built in. As with the Soulsby, MIDI functionality leans on the Arduino. It's 59€ without the Arduino, or bundled for 79€.

Arduino Piggyback Synthesizer – A Hackable Analog Synth

Both boards also rely on USB power, but with a proper adapter, you can plug into a wall socket, so these will stand on their own.

What I’m interested to see is whether users find clever uses for the Arduino hacking aspect. You could certainly build novel applications into firmware by modifying the code. On the other hand, these shields block the ports on the Arduino, which means you can’t easily take advantage of Arduino’s ability to hook up knobs and switches and drive motors and the like. (Here, too, there’s an edge to Tasty Chips – they’ve added headers to the top, and they haven’t used up all the connections on the Arduino, so if you keep the boards side by side, you can still, for instance, add your own knob.)

That said, at these prices, both boards provide some great musical fun and some easy hackability.

And both makers could provide some added stimulation with promised tutorials.

I’m curious what readers think and what you do with them if you pick them up. Do let us know.

Full disclosure: we of course make the MeeBlip, which means we’re thinking about these very questions a lot. (The MeeBlip isn’t Arduino-based, but it is hackable and open and built on the AVR platform with our own Assembly code, as you can check out on GitHub.)

The post Here are two new ways of combining a synth with Arduino appeared first on Create Digital Music.

by Peter Kirn at September 07, 2015 07:51 PM

Qsynth 0.4.0 - Summer'15 release frenzy continued...

So, this Summer'15 release frenzy is not over, yet.

Qsynth 0.4.0 is now released.

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Project page:

Qsynth is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

  • Desktop environment session shutdown/logout management has also been adapted to the Qt5 framework.
  • Single/unique application instance control adapted to Qt5/X11.
  • Output meter scale text color fixed on dark color schemes.
  • Prefer Qt5 over Qt4 by default with configure script.
  • Complete rewrite of Qt4 vs. Qt5 configure builds.
  • A new top-level widget window geometry state save and restore sub-routine is now in effect.
  • Fixes for some strict tests in Qt4 vs. Qt5 configure builds.
  • German (de) translation update (by Guido Scholz, thanks).

Have fun, always!


by rncbc at September 07, 2015 05:30 PM


September 06, 2015

A touch of music

Algorithmic composition: generating tonal canons with Python and music21

1x spiced up chord progression (intermediate step in canon generation)



According to Wikipedia:
Algorithmic composition is the technique of using algorithms to create music.

Some algorithms that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials.
In music, a canon is a contrapuntal compositional technique that employs a melody with one or more imitations of the melody played after a given duration (e.g., quarter rest, one measure, etc.).


Last year I wrote a series of articles on an easy method for writing some types of canons:
In at least one of those articles I claimed that the methods described there should be easy to use as basis for automation. Here I automate the method for writing a simple canon as explained in the first of these articles.


The code discussed below is available under GPLv3 license on github: Similar to the gcc compiler, the music you generate with this program is yours. The GPLv3 license only applies to the code itself.

It depends on free software only: python 2.7, music21 and MuseScore.


This program generates canons from a given chord progression in a fixed key (no modulations for now, sorry!).

It does NOT generate atonal or experimental music – that is, if you're willing to accept the limitations of the program. It can occasionally commit "grave" errors against common practice rules (e.g. parallel fifths/octaves); see the further explanation below.

It closely follows the method explained in the Tutorial on my technique for writing a Canon article referenced above. If you want to understand in detail how it all works, please read that article first, and then come back here. If you just want to experiment with the different program settings, continue :)

One thing is worth explaining in more detail: the article mentions in one of the first steps of the recipe that the composer can start from a chord progression and "spice it up" to get a chorale. So: how do we get a computer to spice up a chord progression without building in thousands of composition rules?

I introduced some note transformations that:
  • introduce small steps between notes so as to generate something that could be interpreted as a melody
  • do not fundamentally alter the harmonic function in the musical context
To accomplish this, I replace a note with a sequence of notes without altering the total duration of the fragment, e.g.
  • original note (half note) -> original note (quarter note), neighbouring note (8th note), original note (8th note)
Other transformations look at the current note and the next note, and interpolate a note in between (again, without changing the total duration):
  • original note (half note), next note -> original note (quarter note), note between original note and next note (quarter note), next note
A nice property of this method is that it composes: after spicing up a list of notes, you can spice up the result again to get an even spicier list (a more complex melody, in both pitch and rhythm).
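
To make this concrete, here is a minimal sketch of the neighbouring-note transformation using music21 – illustrative only, not the actual generator code (the function name spice_note is mine):

    from music21 import note, scale

    def spice_note(n, sc):
        # Replace a note with note + upper neighbour + note,
        # keeping the total duration of the fragment unchanged.
        neighbour = sc.next(n.pitch, 'ascending')  # diatonic upper neighbour
        half = n.quarterLength / 2.0
        return [note.Note(n.pitch, quarterLength=half),
                note.Note(neighbour, quarterLength=half / 2),
                note.Note(n.pitch, quarterLength=half / 2)]

    sc = scale.MajorScale("C")
    for spiced in spice_note(note.Note("E4", quarterLength=2.0), sc):
        print("%s %s" % (spiced.nameWithOctave, spiced.quarterLength))
    # E4 1.0 / F4 0.5 / E4 0.5 -- a half note became quarter + two 8ths

Applying such a transformation to every note of a realized chord progression is exactly the "spicing up" step; applying it again to its own output gives the recursive spicing described above.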

Finally, a warning for the sensitive ears: for a composer, writing a chorale that obeys all the rules of the "common practice" takes years of study and lots of practice. Given the extreme simplicity of the program, the computer doesn't have any of this knowledge, and it will happily generate errors against the common practice rules (e.g. parallel fifths and octaves). Not always, but sometimes, as dictated by randomness. Note that this is an area in which the program could be improved, by checking for errors while spicing up and skipping proposed spicings that introduce errors against the common practice rules.

Yet, despite the extreme simplicity of the method, the results can be surprisingly complex and in some cases sound interesting.

How can I use it?

In its current form, the program is not really easy to install and use, especially if you have little computer experience:
  • You need to install the free python programming language. I recommend using version 2.7 – version 3 and later of python won't work!
  • You also need to install the free music21 toolkit for computer-aided musicology. Follow the instructions on their website. Music21 provides vast amounts of music knowledge which would take a long time to write ourselves. I'm using only a fraction of its possibilities in the canon generator.
  • Then you need something to visualize and audition the MusicXML that is generated by the program. For our purposes, the free MuseScore program works perfectly.
  • Finally you need to get the free program from the github repository.
The main function defined near the bottom of the file contains some parameters you can edit to experiment with the generator (a sketch of how these settings come together follows this list):
  • chords = "C F Am Dm G C"
    #You can insert a new chord progression here. 
  • scale = music21.scale.MajorScale("C")
    #You can define a new scale in which the notes of the chords should be interpreted here 
  • voices = 5
    #Define the number of voices in your canon here 
  • quarterLength = 2
    #Define the length of the notes used to realize the chord progression
    #(don't choose them too short, since the automatic spicing-up will make them shorter) 
  • spice_depth = 1
    # Define how many times a stream (recursively) should be spiced up
    # e.g. setting 2 will first spice up the chords, then again spice up the already spiced chords.
    # scores very quickly become rhythmically very complex for settings > 2
  • stacking = 1
    # the code can generate multiple versions of the spiced up chord progression, and use those
    # versions to create extra voices
    # e.g. setting stacking = 2 will turn a 3-voice canon into a 3*2 = 6-voice canon 
  • voice_transpositions = { VOICE1 : +12, VOICE2 : 0, VOICE3 : -12, VOICE4: -24, VOICE5: 0 }
    # allow extra octave jumps between voices
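
Once a spiced-up melody exists, stacking it into a canon is mostly stream manipulation: delayed, transposed copies of the same line. Roughly along these lines – again an illustrative sketch (make_canon is my name, not the program's):

    from music21 import stream

    def make_canon(melody, voices=3, delay=2.0, transpositions=(0, -12, -24)):
        # Stack delayed, transposed copies of a melody into a canon score;
        # `delay` is the entry distance in quarter notes.
        score = stream.Score()
        for v in range(voices):
            voice = melody.transpose(transpositions[v % len(transpositions)])
            part = stream.Part()
            part.insert(v * delay, voice)  # each voice enters `delay` quarters later
            score.insert(0, part)
        return score

    # canon = make_canon(spiced_melody)
    # canon.show()  # renders via MuseScore when configured as the MusicXML viewer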

What does it sound like?

This is a simple example generated with the program with settings
  • chords = "C F Am Dm G C"
  • scale = music21.scale.MajorScale("C")
  • voices = 5
  • quarterLength = 2 
  • spice_depth = 1
  • stacking = 1 
  • voice_transpositions = { VOICE1 : 0, VOICE2 : 0, VOICE3 : -12, VOICE4: -24, VOICE5: 0 }  

Ideas for future improvements

I see many possible improvements, most of which are low-hanging fruit. Feel free to jump in and improve the code :D
  • fix a known bug related to octave transpositions in keys other than C
  • support modulations, i.e. keep musical key per measure/beat instead of over the complete chord progression
  • extend the code to cover the things explained in the later articles: crab and table canons
  • see if the method/code can be extended to generate canons at the third, fifth, ...
  • smarter spicing up of chord progressions to avoid parallel fifths/octaves (e.g. rejecting a proposed spice if it introduces an error in the overall stream); or use Dmitry Tymoczko's voice leading spaces to ensure better voice leading by construction.
  • protect the end chord from getting spiced up
  • implement more note transformations, e.g. appoggiatura
  • experiment with more rhythms
  • how can we better spice up the chord progressions without messing up too much of the original harmonies?
  • ... 

by Stefaan Himpe ( at September 06, 2015 05:31 AM

September 03, 2015

GStreamer News

GStreamer Conference 2015: Schedule of Talks and Speakers available

The GStreamer Conference team is pleased to announce this year's lineup of talks and speakers, once again covering an exciting range of topics!

The GStreamer Conference 2015 will take place on 8-9 October 2015 in Dublin (Ireland) and will be co-hosted with the Embedded Linux Conference Europe (ELCE) and LinuxCon Europe.

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

  • Interactive video playback and capture in the Processing Language via GStreamer · Andres Colubri
  • Distributed transcoding with GStreamer · Thiago Sousa Santos, Samsung
  • Tiled Streaming of UHD video in real-time · Arjen Veenhuizen, TNO
  • GStreamer and WebKit · Philippe Normand, Igalia
  • Hardware accelerated multimedia on TI’s Jacinto 6 SoC · Pooja Prajod, Texas Instruments
  • Demystifying the allocation query · Nicolas Dufresne, Collabora
  • Synchronised multi-room media playback and distributed live media processing and mixing with GStreamer · Sebastian Dröge, Centricular
  • Implementing a WebRTC endpoint in GStreamer: challenges, problems and perspectives · Dr Luis López, Kurento
  • OpenGL Desktop/ES for the GStreamer pipeline · Matthew Waters, Centricular
  • Robust lipsync error detection using gstreamer and QR Codes · Florent Thiery, Ubicast
  • GStreamer VAAPI: Hardware-accelerated decoding and encoding on Intel hardware · Víctor M. Jáquez L., Igalia
  • Colorspaces and HDMI (*) · Hans Verkuil, Cisco
  • GStreamer State of the union · Tim-Philipp Müller, Centricular
  • Video Filters and their applications · Sanjay Narasimha Murthy, Samsung
  • Camera Sharing and Sandboxing with Pinos · Wim Taymans, RedHat
  • Stereoscopic (3D) Video in GStreamer Redux · Jan Schmidt, Centricular
  • Bin It! AKA, How to use bins and bin subclasses to keep state local and easily manage dynamic pipelines · Vivia Nikolaidou, ToolsOnAir
  • The HeliosTv Distributed DVB stack · Romain Picard, SoftAtHome
  • How to contribute to GStreamer · Luis de Bethencourt, Samsung
  • GstPlayer - A simple cross-platform API for all your media playback needs · Sebastian Dröge, Centricular
  • Improving GStreamer performance on large pipelines: from profiling to optimization · Miguel París
  • Kurento Media Server: experiences bringing GStreamer capabilities to WWW developers · José Antonio Santos
  • ToolsOnAir's mixing pipeline architecture overview · Heinrich Fink, ToolsOnAir
  • Distributed Acoustic Triangulation · Jan Schmidt, Centricular
  • Chromium GStreamer backend · Julien Isorce, Samsung
  • ogv.js: bringing open codecs to Safari and IE with emscripten · Brion Vibber, Wikimedia
  • Bringing GStreamer to Radio Broadcasting · Marcin Lewandowski
  • Daala and NetVC: the next generation of royalty free video codecs · Thomas Daede, Mozilla
  • Profiling individual GStreamer elements (*) · Kyrylo Polezhaiev
  • Pointing cameras at TVs: when HDMI video-capture is not an option · Will Manley, stb-tester
  • decodebin3: designing the next generation playback engine (*) · Edward Hervey, Centricular
(*) preliminary title

Lightning Talks:

  • Hyperspectral imagery · Dimitrios Katsaros, QTechnology
  • Industrial application pipelines · Dimitrios Katsaros, QTechnology
  • gst-gtk-launch-1.0 · Florent Thiery, Ubicast
  • liborc (JIT SIMD generator) experiments · Wim Taymans, RedHat
  • V4L2 GStreamer elements update · Nicolas Dufresne, Collabora
  • Analyzing caps negotiation with GstTracer · Thiago Sousa Santos, Samsung
  • Know your queues! queue, queue2, multiqueue, netbuffer and all that · Tim-Philipp Müller
  • Nle: A new design for the GStreamer Non Linear Engine · Thibault Saunier
  • What is new in GstValidate · Thibault Saunier
  • Continuous Integration update · Edward Hervey
  • Remote GStreamer Debugger · Marcin Kolny
  • gstreamermm C++ wrapper · Marcin Kolny
  • Multipath RTP (MPRTP) plugin in GStreamer · Balázs Kreith
  • OpenCV and GStreamer · Vanessa Chipi
  • ...
  • Submit your lightning talk now!

Full talk abstracts and speaker biographies will be published shortly.

Many thanks to our sponsors – Google, Centricular and Pexip – without whom the conference would not be possible in this form. And to Ubicast, who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Dublin in October! Don't forget to register!

September 03, 2015 06:00 PM

September 02, 2015

Libre Music Production - Articles, Tutorials and News

LMP Asks #12: An interview with Sebastian Posch

This month LMP talked to Sebastian Posch, Linux enthusiast and guitar teacher who likes to incorporate Linux into his teaching sessions.

Hi Sebastian, and thank you for taking the time to do the interview. Where do you live, and what do you do for a living?

by Conor at September 02, 2015 09:20 AM

August 30, 2015


New HQ


It's been a bit quiet here on OpenAV recently – so it's time for an update. OpenAV has moved its primary development location – we're now in Limerick city in the west of Ireland. This means there are many more opportunities to meet like-minded musicians and get direct feedback on Linux audio and how OpenAV software works. In the near future we…

by harry at August 30, 2015 08:14 PM

GStreamer News

New OS X build (

New builds of the 1.5.90 release candidate packages for OS X have been uploaded. This build fixes a problem that made the first build unusable, but contains no source changes. The new binaries can be found here

August 30, 2015 12:00 AM

August 29, 2015

Moved to WordPress

Took the plunge and migrated my blog to WordPress. Thanks to this PHP script it wasn’t that much work. There were some issues with titles that contained the odd character, but other than that the migration went pretty smoothly.

Hopefully this will revive my blogging spirit a bit. PivotX was sometimes pretty cumbersome to work with; I had to manually edit the HTML just a tad too often. And editing the HTML was quite tedious – it opened a new window with its own save button, not very fun to work with. Now I can simply toggle between Visual and Text. Other than that, WordPress simply has more to offer, like plug-ins and themes that you can install on the fly. Being a big CMS has its drawbacks too, of course. I installed Wordfence and Disable XML-RPC to keep out the bad guys. Hopefully it won’t get so bad that I have to resort to solutions like fail2ban. We’ll see.

The post Moved to WordPress appeared first on

by jeremy at August 29, 2015 08:00 PM

August 27, 2015

Pid Eins

systemd.conf 2015 CfP REMINDER

LAST REMINDER! systemd.conf 2015 Call for Presentations ends August 31st!

Here's the last reminder that the systemd.conf 2015 CfP ends on August 31st 11:59:59pm Central European Time (that's Monday next week)! Make sure to submit your proposals by then!

Please submit your proposals on our website!

And don't forget to register for the conference! Only a limited number of registrations are available due to space constraints! Register here!

For further details about systemd.conf consult the conference website.

by Lennart Poettering at August 27, 2015 10:00 PM

Create Digital Music » open-source

Get physical modeling sonic powers, free, in Max starter kit


There is a powerful world of sound exploration in your hands. But sometimes the hardest part is just starting.

So the quiet launch of a site called Maxology is very good news. It’s evidently a place to go for tutorials and projects and more.

And right now, you can grab a bunch of free and open source objects for physical modeling, built for Max 7 and Max for Live. That opens a window into a world of realistic and impossible sounds, built on algorithms that mimic the way instruments work physically and acoustically.
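
If physical modeling is new to you, the classic Karplus-Strong plucked-string algorithm gives a taste of what objects like these do internally: a burst of noise circulating through a filtered delay line. A minimal Python/numpy sketch, illustrative only – the PeRColate objects are far more sophisticated:

    import numpy as np

    def pluck(freq, sr=44100, dur=1.0):
        # Karplus-Strong: noise burst + filtered delay line = plucked string.
        n = int(sr / freq)                 # delay-line length sets the pitch
        buf = np.random.uniform(-1, 1, n)  # the "pluck" is a burst of noise
        out = np.empty(int(sr * dur))
        for i in range(len(out)):
            out[i] = buf[i % n]
            # averaging adjacent samples low-passes the loop, damping the string
            buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
        return out

    # samples = pluck(440.0)  # one second of a plucked A, ready to write to a WAV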

The PeRColate Objects Starter Kit is a reissue of one of the classic libraries for this form of synthesis, updated and refreshed and newly documented, even with tutorials for beginners. PeRColate is something special – it’s built from the Synthesis Toolkit by legendary synth scientist Perry R. Cook with Gary Scavone, adapted by the also-legendary Dan Trueman (pioneer of the laptop orchestra, by many accounts) and R. Luke Dubois (pioneer of lots of other things). And it covers a range of techniques – physical modeling, modal, and PhISM, for those of you who are aficionados of these things, are all there.

Together, you can build realistic-sounding instruments, wild new instruments and experimental sounds, and effects.

What does it sound like? Well, kind of like whatever you want – but here’s one example, by axxonn:

Produced using only the following: two instances of Gen Random Synth, 909 samples in Gen Wave Synth, Scrub Face Delay and Reverb.

These devices are all made by Tom Hall using objects from the PeRColate collection, recently updated and made available by Maxology (including the MFL devices) for Max 7.

There’s a bunch of stuff there for free. (Max 7 isn’t free, but recently-adjusted pricing and subscriptions – plus the inclusion of Max for Live – mean that price of entry isn’t so prohibitive, given the amount of value that’s there. And see my note about Pd below; I’m researching.)

For Max 7:
1. PeRColate objects
2. Starter patches
3. Full help documentation
4. Tutorials
5. A pitchtracker, so you can try playing along with real instruments, too

For Max for Live:
1. A wavetable synth with built-in randomness
2. A wavetable generator
3. A granulator, for transposition and special effects
4. A scrubbing delay-line effect

And because it’s all built in Max, you can combine objects modular-style to build your own special instruments. In fact, while I love modular hardware, a lot of what you do with a physical modular is really inter-connecting boxes that are already built for you. Working with Max in this way allows you to go much deeper, if you so choose, and really get deep into the logic and construction of what you’re doing.

I don’t think one approach is better than another; they’re just different. But I think maybe the reason people haven’t played so much with this sort of digital depth is that it does require a little more learning – and this sort of complete documentation can at last make it friendly for those of you ready to embark on that adventure.

For more:

Physical modeling primer for Max Users by Gregory Taylor
Physical Modeling Explained by Martin Russ

Also, since the objects themselves are open source, I’d love to see them ported to Pd. Max is a very friendly desktop environment and has this unique Ableton Live integration, but then also having Pd opens up things like developing physical instruments on mobile devices.


Don’t miss Starter Kit #1, either – a computer vision library that updates some classic visual tools in Jitter:


The post Get physical modeling sonic powers, free, in Max starter kit appeared first on Create Digital Music.

by Peter Kirn at August 27, 2015 04:29 PM

August 24, 2015

Pid Eins

First Round of systemd.conf 2015 Sponsors

First Round of systemd.conf 2015 Sponsors

We are happy to announce the first round of systemd.conf 2015 sponsors!

Our first Silver sponsor is CoreOS!

CoreOS develops software for modern infrastructure that delivers a consistent operating environment for distributed applications. CoreOS's commercial offering, Tectonic, is an enterprise-ready platform that combines Kubernetes and the CoreOS stack to run Linux containers. In addition CoreOS is the creator and maintainer of open source projects such as CoreOS Linux, etcd, fleet, flannel and rkt. The strategies and architectures that influence CoreOS allow companies like Google, Facebook and Twitter to run their services at scale with high resilience. Learn more about CoreOS here, Tectonic here, or follow CoreOS on Twitter @coreoslinux.

A Bronze sponsor is Codethink:

Codethink is a software services consultancy, focusing on engineering reliable systems for long-term deployment with open source technologies.

A Bronze sponsor is Pantheon:

Pantheon is a platform for professional website development, testing, and deployment. Supporting Drupal and WordPress, Pantheon runs over 100,000 websites for the world's top brands, universities, and media organizations on top of over a million containers.

A Bronze sponsor is Pengutronix:

Pengutronix provides consulting, training and development services for Embedded Linux to customers from the industry. The Kernel Team ports Linux to customer hardware and has more than 3100 patches in the official mainline kernel. In addition to low-level ports, the Pengutronix Application Team is responsible for board support packages based on PTXdist or Yocto and deals with system integration (this is where systemd plays an important role). The Graphics Team works on accelerated multimedia tasks, based on the Linux kernel, GStreamer, Qt and web technologies.

We'd like to thank our sponsors for their support! Without sponsors our conference would not be possible!

We'll shortly announce our second round of sponsors, please stay tuned!

If you'd like to join the ranks of systemd.conf 2015 sponsors, please have a look at our Becoming a Sponsor page!

Reminder! The systemd.conf 2015 Call for Presentations ends on Monday, August 31st! Please make sure to submit your proposals on the CfP page by then!

Also, don't forget to register for the conference! Only a limited number of registrations are available due to space constraints! Register here!

For further details about systemd.conf consult the conference website.

by Lennart Poettering at August 24, 2015 10:00 PM

Vee One Suite 0.7.1 - A seventh-bis beta release

Hello again,

The Vee One Suite, aka the gang of three old-school software instruments – featuring synthv1, a polyphonic synthesizer; samplv1, a polyphonic sampler; and drumkv1, a drum-kit sampler, which gets a small but pertinent bug-fix – are all making up the so-called Summer'15 release frenzy (extended edition ;)).

All made available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.


  • Fixed a recent bug/mistake that was causing a complete reset/revert of all element parameters to prior values upon loading an element sample file. (applies to drumkv1 only)
  • Improved Qt4 vs. Qt5 configure builds (via qmake).

The Vee One Suite are free and open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And here they go again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.1 (seventh-bis official beta) is released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.



samplv1 - an old-school polyphonic sampler

samplv1 0.7.1 (seventh-bis official beta) is released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.



drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.1 (seventh-bis official beta) is released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.



Enjoy && have (lots of) fun ;)

by rncbc at August 24, 2015 06:00 PM


August 22, 2015

Libre Music Production - Articles, Tutorials and News

Find presets, synth patches, sample libraries with new website, Musical Artifacts

The new website, Musical Artifacts, has just launched a beta version. The website brings together presets, synth patches, sample libraries, etc., all in one place for users to browse and download. You can also submit 'artifacts' of your own to the website for other people to discover.

by Conor at August 22, 2015 05:45 PM

Linux Audio Announcements -

[LAA] [ANN] A website for pre-sets, soundfonts, and more

Hey everyone, this is a project I've been working on for some months now,
and I think it's ready for some beta testing. Musical Artifacts is a place
to collect free 'musical artifacts' – that is, pre-sets, configuration files,
soundfonts, etc. – make them searchable and taggable, and give credit to authors.

For example here are some soundfonts, zyn pre-sets and guitarix tones:

I've started populating the application with some artifacts and will
continue doing so in the future. Also, the application is in BETA right now, so
new features are coming and some bugs are to be expected!

You can look at the source, contribute or file bugs via github

That's it!

by at August 22, 2015 02:45 PM

August 21, 2015

Talk Unafraid

The Dark Web: Guidance for journalists

We had a lot of coverage of “the dark web” with the latest Ashley Madison leak coverage. Because a link to a torrent was being shared via a Tor page (well, nearly – actually most people were passing around the Tor2Web link), journalists were falling over themselves to highlight the connection to the “dark web”, that murky and shady part of the internet that probably adds another few % to your click-through ratios.

So many outlets and journalists – even big outfits like BBC News and The Guardian – got their terminology terribly wrong on this stuff, so I thought I’d slap together some guidance, being somewhat au fait with the technology involved. Journalists are actually a big part of the reason why these sorts of tools exist in the first place – if that surprises you, read on…

The Dark, Deep Internet

What the hell is “the dark web” anyway? Why is it different from the “deep web”? Why, for that matter, does it differ from the “web”?

First up, to clarify: “the dark web” and “darknets” are practically the same thing, and the terms are used interchangeably.

So: The Deep Web and The Web are technically the same. People often refer to the deep web when they are referring to websites (that is, sites on the internet) that are hard to find with normal search engines because they are not linked to in public. Tools like Google depend on being able to follow a chain of links to find a website – if there are no links that Google can see, it’s not going to get into the Google index, and so will not be searchable. These sites are still on the internet, though, and anyone who is given the link can put that in a perfectly normal browser and reach that site.

The Dark Web, however, refers to a different technical domain. Dark web or “darknet” sites are only reachable using a tool that encrypts and re-routes your traffic, providing a degree of anonymity. These tools we typically call “anonymity networks”, or “overlay networks”, as they run on top of the internet’s infrastructure. You need to be a part of this network to be able to reach content in the “dark web”. The dark web refers to lots of different tools – Tor is the most widely known, but isn’t all about the dark web, as we’ll learn shortly. I2P and Freenet are two other well-known examples of overlay networks. It’s worth noting that these networks don’t interoperate – the Tor darknet can’t talk to the I2P darknet, as they use radically different technical approaches to achieve similar results.

The Onion Router, Clearnet and Darknet

Map from the Oxford Internet Institute showing Tor usage across the world

Tor (The Onion Router) is a peer to peer, distributed anonymization network that uses strong cryptography and many layers of indirection to route traffic anonymously and securely around the world. Most people using Tor are using it as a proxy for “clearnet” sites; others use it to access hidden services. It’s by far the most popular darknet.

From a darknet perspective, clearnet is the real internet, the world wide web we all know and love. The name refers to the fact that information on the clearnet is sent “in the clear”, without any encryption built into the network protocols (unlike darknets, where encryption is built into the underlying network).

Tor is a technical tool, and is used primarily as a network proxy. To use Tor a client is installed, which will connect to the network. This same client can optionally relay traffic from other clients, expanding the network. As of this post there are about 6500 relays in the Tor network, and 3000 bridges – these bridges are not publicly listed, making it hard for hostile governments to block them, and so allowing users in hostile jurisdictions to connect to the network.
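
In concrete terms, "using Tor as a proxy" just means pointing an application at the SOCKS port the local Tor client opens (9050 by default). A quick Python sketch, assuming a Tor client is running locally and the requests library is installed with its SOCKS extra (pip install requests[socks]):

    import requests

    # "socks5h" (note the h) makes DNS resolution happen inside Tor as well,
    # so hostname lookups don't leak to the local resolver.
    proxies = {"http":  "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}
    r = requests.get("https://check.torproject.org/", proxies=proxies)
    print(r.text[:300])  # the page confirms whether you arrived via Tor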

The Tor project also provides the Tor Browser Bundle, which is a modified version of Firefox ESR (Extended Support Release) that contains a Tor client and is configured to prevent many de-anonymization attacks that focus on exploiting the client (for instance, forcing non-Tor connections to occur to a site under the attacker’s control using plugins like Flash or WebRTC, allowing correlation between Tor and clearnet traffic to identify users). This is the recommended way to use Tor for browsing if you’re not using TAILS.

TAILS is a project related to Tor that provides a “live system” – a complete operating system that can be started and run from a USB stick. TAILS stands for The Amnesiac Incognito Live System – as the name suggests, it remembers nothing, and does all it can to hide you and your activity. This is by far the most robust tool if you’re aiming to protect your activity online, and is used widely by journalists across the world, as it’s easy to take with you and hide – even in very hostile environments.

Hiding from the censors

On the internet it’s reasonably easy to find out where a website is hosted, who’s responsible for it, and from there it’s easy for law enforcement to shut it down by contacting the hosts with the right paperwork. It’s also normally quite easy from that point to find out who was running a website and go after them, though there’s plenty of zero-knowledge hosts out there who will accept payment in cash or Bitcoin, ask no questions and so on.

There’s another facet to this – if you’re a government trying to block websites, it’s very easy to look at traffic and spot traffic destined for somewhere you don’t like, and either block it or modify the contents (or simply observe it). This is common practice in countries like Iran, China, Syria, Israel, and quite a lot of the reason why Tor exists – the adoption of this filtering technology by countries like the UK, ostensibly to prevent piracy, limit hate speech or “radical/extremist views”, or to protect children, is driving Tor adoption in the west, too.

Hidden services (and while Tor is the most commonly cited example, other networks support similar functionality) effectively use the same approach used to hide the origin of traffic destined for the clearnet to hide both the origin and destination of traffic between a user and a hidden service. Unless the hidden service itself offers a clue as to its owners or location, users of that service can’t identify where that hidden service is operated from. Likewise, the operators of the hidden service can’t see where their users come from. Traffic between the two ends meets in the middle at a randomly picked rendezvous point, which also has no knowledge of what’s being transferred or where it’s come from or going to.

This allows for the provision of services within the darknet entirely, removing the need for the clearnet. This has many advantages – mainly, if your Tor exit node for a session happens to be in Russia, you’re likely to see Russian censorship as your traffic leaves Tor and enters the clearnet. If your traffic never reaches the clearnet, government censorship is unable to view and censor that traffic. It’s also very hard for governments monitoring darknets to reach out and shut down sites that are hosted in their jurisdiction – because they don’t know which sites are in their jurisdiction.
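
The Tor control protocol even lets you publish a hidden service programmatically. A sketch using the stem library, assuming Tor's ControlPort is enabled on 9051 and something is listening locally on port 8080:

    from stem.control import Controller

    # Map hidden-service port 80 to a local web server on 127.0.0.1:8080.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        service = controller.create_ephemeral_hidden_service(
            {80: 8080}, await_publication=True)
        print("Reachable at %s.onion" % service.service_id)
        # The service disappears when this controller connection closes.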

Increasingly, legitimate sites have started to offer hidden service mirrors or proxies, allowing Tor users to browse their content without leaving the network. Facebook, ironically, was one of the first major sites to offer this, targeting users in jurisdictions where network tampering is common. The popular search engine DuckDuckGo is another example.

Designed for criminals, or just coincidentally useful?

Of course, there are some criminal users of these networks – just as there are criminal users of the internet, and criminal users of the postal service, and criminal users of road networks. But was Tor made for criminal purposes?

Short answer, no. The long answer is still no – Tor was originally developed by the United States Naval Research Laboratory, and development has been subsequently funded by a multitude of sources, mostly related to human rights and civil liberty movements, including the US State Department’s human rights arm. Broadcasters increasingly fund Tor’s development as they try and find new ways to reach markets traditionally covered by border-spanning shortwave broadcasts. You can read up on Tor’s sponsors here.

The point is, Tor and other networks like I2P and Freenet were never designed with criminals in mind, but rather with strong anonymity and privacy in mind. These properties are technical, and define how the tool is designed and developed. These properties are vital for the primary users of these tools, and are intrinsically all-or-nothing.

This is an important point, and one that crops up again and again in both discussions of Tor and when discussing things like government interception of encryption, or “banning” encryption unless it’s possible for the government to subvert it “in extremis“, as has been called for numerous times by the UK government, to give one example.

On a technical level, and a very fundamental one at that, one cannot make a tool that is simultaneously resistant to government censorship and traffic manipulation/interception and that also permits lawful intercept by law enforcement authorities, because these networks span borders, and one person’s lawful intercept is another person’s repressive government. There is a lot of technical literature out there on why this is an exceptionally hard problem and practically infeasible, so I won’t go into detail on this. However, key escrow (the widely accepted “best” approach – though still highly problematic) has been attempted in the past by the NSA and the Clipper chip – and it failed spectacularly.

Clipper chip die, implementing the short-lived SKIPJACK cipher and key escrow functionality, allowing in theory only the US Government to intercept and decrypt traffic. Within 3 years it had been comprehensively broken and abandoned.

These properties of anonymity and security also make the services attractive to certain types of criminals, of course, but in recent reports such as this one from RAND on DRL (US State Dept) funded Tor development, the general conclusion is that Tor doesn’t help criminals that much, because there’s better tools out there for criminal use than Tor:

There is little reported evidence that the Internet freedom tools funded by DRL [ie: Tor] assist illicit activities in a material way, vis-à-vis tools that predated or were developed without DRL funding…

… given the wealth and diversity of other privacy, security, and social media tools and technologies, there exist numerous alternatives that would likely be more suitable for criminal activity, either because of reduced surveillance and law enforcement capabilities, fewer restrictions on their availability, or because they are custom built by criminals to suit their own needs – RAND Corporation report

Law enforcement efforts to shut down darknet sites like Silk Road (and its many impersonators – there are by some estimates now several hundred sites like it that sprung up in the aftermath of its shutdown) tend to focus on technical vulnerabilities in the hidden service itself – effectively breaking into the service and forcing it to provide more information that can be used to identify it. Historically, however, most darknet site takedowns have been social engineering victories – where the people running a site are attacked, rather than the site itself.


I hope the above is useful for journalists and others trying to get a basic understanding of these tools beyond using scary terms like “the dark web” in reports without really knowing what that means. If you want to find out more then the links below are a good starting point.

by James Harrison at August 21, 2015 01:16 PM

Ubuntu Studio » News

Your chance to help – Beta Testing

If you would like to lend a hand to the volunteer project Ubuntu Studio, this is the perfect time. It’s Beta testing time! You’ll need to at least get yourself an account at, and subscribe to our devel mail list in order to assist. Read more about how to do testing in this post […]

by Kaj Ailomaa at August 21, 2015 08:06 AM

August 20, 2015

Linux Audio Announcements -

[LAA] [ANN] Virtual MIDI Piano Keyboard (VMPK) 0.6.1 released

Virtual MIDI Piano Keyboard is a MIDI events generator and receiver. It
doesn't produce any sound by itself, but can be used to drive a MIDI
synthesizer (either hardware or software, internal or external). You can use
the computer's keyboard to play MIDI notes, and also the mouse. You can use
the Virtual MIDI Piano Keyboard to display the played MIDI notes from
another instrument or MIDI file player.
The precompiled packages include the GeneralUser GS SoundFont by S.Christian
Collins ( ready to use
with the FluidSynth output driver (also included in these packages, providing
beautiful sounds out of the box).
Changes for v0.6.1:
* Fixes for ALSA (Linux) and Windows input drivers,
(provided by Drumstick 1.0.1 libraries)
* Packaged using the Qt Frameworks 5.5.0
* Fixed ticket #27: save keyboard maps with default xml extension
* Fixed ticket #29: display input event noteon with velocity=0 as noteoff
* Color palette management fixes
* Updated Russian and Serbian translations
Compilation minimum requirements for all platforms: CMake 3.0, Qt 5.1 and
Drumstick 1.0 or later.
Please use the mailing list <> for questions
and comments. Thanks.
Copyright (C) 2008-2015, Pedro López-Cabanillas and others
License: GPL v3
More info
Linux-audio-announce mailing list

by at August 20, 2015 09:17 PM

[LAA] [ANN] Drumstick libraries 1.0.1 released

Drumstick is a set of MIDI libraries using C++/Qt5 idioms and style. Includes
a C++ wrapper around the ALSA library sequencer interface: ALSA sequencer
provides software support for MIDI technology on Linux. A complementary
library provides classes for processing SMF (Standard MIDI files: .MID/.KAR),
Cakewalk (.WRK), and Overture (.OVE) file formats. A multiplatform realtime
MIDI I/O library is also provided with ALSA, OSS, Windows, Mac OSX, Network
and FluidSynth direct output backends.

Changes for v1.0.1
* RT library: Fix for ticket #4: ALSA Midi Input not working
* RT library: Fixed windows midi input

Compilation minimum requirements for all platforms: CMake 3.0 and Qt 5.1

Copyright (C) 2009-2015, Pedro Lopez-Cabanillas
License: GPL v2 or later

Project web site

Online documentation


Linux-audio-announce mailing list

by at August 20, 2015 09:13 PM

Libre Music Production - Articles, Tutorials and News

Run Pure Data patches inside your DAW

Oliver Greschke recently unveiled PD Pulp, a plugin that lets you import your own Pure Data patch files and run them inside your DAW. PD Pulp also provides you with 10 controllable parameters.

by Conor at August 20, 2015 05:23 PM

August 19, 2015

GStreamer News

GStreamer Core, Plugins, RTSP Server 1.6.0 release candidate (1.5.90)

The GStreamer team is pleased to announce the first release candidate for the stable 1.6 release series. The 1.6 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The final 1.6.0 release is planned in the next few days unless any major bugs are found.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

August 19, 2015 02:29 PM

August 18, 2015

Pid Eins

systemd.conf 2015 Call for Presentations

REMINDER! systemd.conf 2015 Call for Presentations ends August 31st!

We'd like to remind you that the systemd.conf 2015 Call for Presentations ends on August 31st! Please submit your presentation proposals before that date on our website.

We are specifically interested in submissions from projects and vendors building today's and tomorrow's products, services and devices with systemd. We'd like to learn about the problems you encounter and the benefits you see! Hence, if you work for a company using systemd, please submit a presentation!

We are also specifically interested in submissions from downstream distribution maintainers of systemd! If you develop or maintain systemd packages in a distribution, please submit a presentation reporting about the state, future and the problems of systemd packaging so that we can improve downstream collaboration!

And of course, all talks regarding systemd usage in containers, in the cloud, on servers, on the desktop, in mobile and in embedded are highly welcome! Talks about systemd networking and kdbus IPC are very welcome too!

Please submit your presentations by August 31st!

And don't forget to register for the conference! Only a limited number of registrations are available due to space constraints! Register here!

Also, limited travel and entry fee sponsorship is available for community contributors. Please contact us for details!

For further details about the CfP consult the CfP page.

For further details about systemd.conf consult the conference website.

by Lennart Poettering at August 18, 2015 10:00 PM

August 14, 2015

Libre Music Production - Articles, Tutorials and News

Ardour 4.2 arrives, bug fixes galore

The Ardour devs have just announced version 4.2, codenamed "Taking Tiger Mountain". This release is primarily a bug fix release.

Notably, the audio/MIDI IO backend for Windows has been replaced with completely new code. Other areas that have been concentrated on for bug fixes include plugins, automation, video and MIDI control surfaces.

For full details on changes in this release, check out the announcement over at

by Conor at August 14, 2015 08:17 PM

August 13, 2015


Ardour 4.2 released

The Ardour project is pleased to announce the release of 4.2. This is primarily a bug fix release, but the list of fixes is long, and we've also replaced the audio/MIDI IO backend for Windows with completely new code which we think will address some of the issues faced on that platform. This release also sees the return of downloads for Apple PowerPC platforms.

There are other exciting changes waiting in the wings for 4.3, but we're already slightly off our monthly release schedule, and this long list of fixes merits a release of its own.


by paul at August 13, 2015 06:27 PM

Blog move

Just a quick note to say that I’ve moved this blog from a shared server at Quickhost to my own VPS at Cyso. So if you run into things that don’t seem right please let me know! I’m also playing with the idea of migrating this blog from PivotX to WordPress. Not that I’m a big fan of WordPress, on the contrary, but it seems PivotX is becoming a bit of a cul-de-sac.

The post Blog move appeared first on

by jeremy at August 13, 2015 09:05 AM

August 12, 2015

Create Digital Music » Linux

Here are ten reasons Reaper 5 upgrade will make users happy


Reaper 5 is out today. It’s the compact, tight, powerful music and audio production software whose users would like to know why more of you aren’t talking about it.

And they have a point. Reaper 5 is US$60 with a bunch of included free upgrades, or a voluntary $225 for “commercial” use. Even the demo runs a full 60 days with no restrictions. Yet Reaper does a lot of things other DAWs don’t – even some of the priciest out there – in a compact tool that has exhaustive hardware and OS support, plus complete scripting.

Now, what Reaper 5 doesn’t have is an easy way of describing itself in marketing terms. There’s actually not a single “banner” feature. It’d be easier to say that Reaper 5 does what the earlier versions of Reaper did, but “more better.” And so, knowing how passionate Reaper users are, I’d love to hear what you care about most.

Also, the answer to why more people don’t talk about Reaper is simple. Reaper users love it because the software does stuff other people don’t necessarily care enough about. Unfortunately, some of those people don’t care enough about it to … uh, use Reaper.

But don’t let the nerdiness turn you off. This is a great DAW at a kind of insanely-low, don’t tell your accountants price.

And I can sum up what I think are version 5’s most significant overall improvements:

1. It’ll make you happy if you use video. Support for adding videos to projects is a big feature of Reaper, and now it’s massively improved, including powerful features for decoding and displaying video with high resolution, high performance playback.

2. It has an entire script development environment, built in. Okay, this is pretty geeky, but developers get richer-than-ever options for Lua scripting right in the DAW – including their own IDE. If you don’t code, the upshot is that the people who do can do it more quickly and reliably – and then you can use their scripts to save time. There are tons of API additions, too. (There’s a tiny example script after this list.)

3. It handles multichannel media really well. This lets you edit more easily with formats like Ambisonics.

4. It’s insanely powerful at automation. Automation is recorded per take, and now includes various performance enhancements. It’s sample-accurate with VST3 and JSFX. (We have black MIDI; maybe black VST can be a thing?) All of this can be managed from the Project Bay, too.

5. It’ll keep time however you like. Custom metronome beat patterns ticks away as you want, and a ruler can now accurately display time signature, tempo, and highly accurate video frame info.

6. It’s got a prettier theme. More theme customization options, too.

7. You can group controls. Link track controls wherever you want in the signal flow.

8. It improves MIDI editing and control. MIDI note-off velocity is editable, and there are new options for more precisely editing note edges with the mouse.

9. It’s faster and more efficient. There are performance improvements everywhere. I could go into them, but they’re boring to write about, so instead I’ll do what they do and save you time.

10. It doesn’t abandon older OSes. Okay, that’s not an upgrade – but it’s the absence of a downgrade. And in an industry where this is increasingly uncommon, you can run Reaper all the way back to XP on Windows, or 10.5 on OS X. (Note that the same can’t be said of all the plug-in formats and plug-ins, but still.) It also plays nice under WINE, so you can run it under Linux even though there’s not a Linux native version.

Video support (for film/TV scoring, for instance) is a major difference between Reaper and PreSonus’ Studio One, as mentioned before. So, too, is scriptability. So while I do admire Studio One, those could be deciding points for some readers, as we heard in comments.

And Reaper still does the stuff it already did well. That includes loads of multichannel and routing features (including real surround support), lots of nice built-in effects, modulation features, and OSC support for easy control. And it’s small enough to put on a portable drive, so you can take it with you to someone else’s studio.

But you don’t have to take my word for it. You can try it for two months free and see if it makes you happy.

And for more, turn to the founder of developer Cockos.

Justin Frankel isn’t just an important name in the world of DAWs. He has possibly the most unique resume in the business, as the man behind Winamp and gnutella (kids, ask your Gen X parents about that file sharing service), not to mention making a crucifix-shaped programmable DSP platform called Jesusonic.

Seriously, the number of people who have both sold a company to AOL and made a big messiah-themed effects platform is … one.

He spoke to our friends at SonicScoop, wearing a possibly Jesus-ish beard.

And he talks about what makes the tool special:


And for more:

Meanwhile, the roots of that Jesusonic remain in Reaper. I just hope for Reaper hardware. Because:


And if you do want to learn scripting:

The post Here are ten reasons Reaper 5 upgrade will make users happy appeared first on Create Digital Music.

by Peter Kirn at August 12, 2015 10:23 PM

Create Digital Music » open-source

Ninja Tune’s remix, creation app on Android after 300k iOS downloads


Before Ableton Live, before VJ apps, the AV act Coldcut were already making their own software for remixed audiovisual performance. Now, with the Ninja Tune label they founded, Matt Black is still championing the notion of performance that goes beyond pressing play.

I’ve never seen anyone pick up Ninja Jamm and not immediately fall in love with it. It’s just a tremendous amount of fun working with the built-in effects and quick access to bits and pieces of music.

The likes of Amon Tobin, Bonobo, and Roots Manuva are there, with a variety of genres. There’s also Loopmasters sound content, so you can make your own tunes from loops, too.

Now, as everyone else debates playback apps and streams, will listeners embrace more active “performance” of music? That’s yet to be seen. The app itself, perhaps a bit of a slow burn at first, has gradually racked up 300,000 downloads on iOS. And now, it comes to Android.

It’s free; you purchase the content you want in-app.

In fact, it’s interesting to watch Native Instruments push Stems – Ninja Jamm was already working with four stems per track on iOS. (Ninja is not yet on NI’s label list; I’ll ask them about that.)

Have at it on your gadget of choice:

Google Play


There’s also a remix contest on:

— with some nice prizes and Roots Manuva as the starting point.

You have to admire what Ninja are able to do here. They have the app and the artists, making a complete experience for their fans.

If you do use this in some way, we’d love to know what you think.

Side note: the app makes use of two open source frameworks that make cross-platform compatibility practical: OpenFrameworks and libpd.

The post Ninja Tune’s remix, creation app on Android after 300k iOS downloads appeared first on Create Digital Music.

by Peter Kirn at August 12, 2015 03:44 PM

August 10, 2015

Libre Music Production - Articles, Tutorials and News

LMP Asks #11: An interview with David Robillard, aka drobilla

This month LMP talked to David Robillard, a long-time member of the community and author of several FLOSS projects related to Linux audio. One of David's projects is LV2, the current standard for Linux audio plugins and the successor to the older LADSPA standard.

Where do you live, and what do you do for a living?

by Gabbe at August 10, 2015 07:40 PM

August 01, 2015

Libre Music Production - Articles, Tutorials and News

Guitarix 0.33.0 released

The Guitarix developers have just announced a new release, version 0.33.0. This release sees a number of new plugins added to the virtual guitar amp simulator. These include -

A new Wah plugin with manual/auto/alien mode and the following emulated wah-wahs to select from -

  • Colorsound Wah
  • DallasArbiter Wah
  • Foxx Wah
  • JEN Wah
  • Maestro Boomer Wah
  • Selmer Wah
  • Vox Wah

by Conor at August 01, 2015 09:47 AM

Linux Audio Announcements -

[LAA] Guitarix 0.33.0 release

Release 0.33.0 is out,

Guitarix is a tube amplifier simulation for jack (Linux), with an additional mono and a stereo effect rack. Guitarix includes a large list of plugins[*] and supports LADSPA / LV2 plugins as well.

The guitarix engine is designed for live usage, and features ultra-fast, glitch- and click-free preset switching. It is fully MIDI (learn) and remote (web interface / GUI) controllable (bluez / avahi).


This release comes with the old user interface and reflects only changes in the plugin section.

New Plugins in guitarix

A new Wah plugin with manual/auto/alien mode and the following emulated wah-wahs to select from:

Colorsound Wah
DallasArbiter Wah
Foxx Wah
JEN Wah
Maestro Boomer Wah
Selmer Wah
Vox Wah

A new Fuzz section with emulations of the:

Astrotone Fuzz
Dunlop Fuzzface
Fuzzface Roger Mayer Mods
Fuzzface Fuller Mods
Screaming Bird
Colorsound Tonebender
Vintage Fuzzmaster
Fat Furry Freak
Fuzz Drive

and emulations of:

LPB-1 Linear Power Booster
High Frequency Brightener
Hogs Foot
Dallas Rangemaster
Buffer Booster
Transistor Buffer
Colorsound Overdrive

Guitarix is free, open-source software, distributed under the terms of
the GNU General Public License (GPL) version 2 or later.

Please refer to our project page for more information:

Download Site:

Linux-audio-announce mailing list

by at August 01, 2015 06:29 AM

July 31, 2015

Linux Audio Announcements -

[LAA] rkr lv2 Beta 0 released

Hi all!

Would you like 40 more effect plugins?

After many months of persistence I have ported the Rakarrack effects to lv2
format. With each effect, the individual effect presets have been ported as
well. I am very excited about this project and hope that this can keep the
Rakarrack project as one of the highlights of linux audio. Rakarrack is
currently in need of a proper maintainer. So if you are interested please
contact the developers! I made these ports such that all changes could be
ported back into the main codebase.

The porting took a lot of manual entry of matching parameter bounds and indices, etc. So as you test and use these, please report any strange behaviour you notice (such as changing the "gain" parameter actually changing the decay time). I will be trying to test these but it will take me quite some time to give them all a good shakedown. Current status is that they all can be loaded by Carla and Jalv, with some audio testing on a few by me, and some testing by other parties. No known issues currently exist.

I am currently working on porting the preset banks to Carla rack presets.
This will allow us to load any of the presets from Rakarrack and insert it
as a single plugin in Ardour or other hosts. I would like to get the
community started testing these in parallel with my effort. Once I have
that done I will announce a second beta.

Not all of the effects were ported, since some were duplicate code, direct copies of ladspa plugins, or redundant with other lv2 plugins (like the looper or convolotron). The interface was kept close to Rakarrack, but some parameter changes were made for clarity. Wet/dry is one example: in the original, -64 is all wet, 0 a 50/50 wet/dry mix, and 63 all dry; now 0 means all wet and 127 all dry. Parameter names were altered in places as well (e.g. St. Df. is now labeled "Left/Right LFO delay"). Case by case feedback on these decisions is also welcome.
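To make that wet/dry change concrete, here is a tiny hypothetical helper (TypeScript, not part of the plugin bundle) showing the old-to-new conversion. Since both scales run wet-to-dry, it reduces to a simple offset:

    // Rakarrack wet/dry: -64 (all wet) .. 63 (all dry).
    // rkr lv2 wet/dry:     0 (all wet) .. 127 (all dry).
    function wetDryToLv2(original: number): number {
      if (original < -64 || original > 63) {
        throw new RangeError("Rakarrack wet/dry must be within -64..63");
      }
      return original + 64; // -64 -> 0, 0 -> 64 (~50/50), 63 -> 127
    }

    console.log(wetDryToLv2(-64)); // 0   (all wet)
    console.log(wetDryToLv2(0));   // 64  (roughly the 50/50 point)
    console.log(wetDryToLv2(63));  // 127 (all dry)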

For those unfamiliar with Rakarrack, here are all the effects, marked to
show which are in this new plugin bundle:

EFFECTS (X - done, W - won't do, + - done with missing features)
[X] Lineal EQ
[X] Compressor
[X] Distortion
[W] Overdrive - exact same engine as distortion, but has fewer controls,
presets were added to dist.
[X] Echo
[X] Chorus
[W] Phaser - I'm not planning to do this. I'm only interested in the analog
phaser version
[X] Analog Phaser
[W] Flanger - this is identical to the chorus, presets from this will be
included there
[X] Reverb
[X] Parametric EQ
[X] Cabinet Emulation
[X] AutoPan/Stereo Expander
[+] Harmonizer - midi mode was not ported
[X] Musical Delay
[W] Noise Gate - Direct copy of Steve Harris's Gate ladspa plugin
[X] WahWah
[X] AlienWah
[X] Derelict
[X] Valve
[X] Dual Flange
[X] Ring
[X] Exciter
[X] DistBand
[X] Arpie
[X] Expander
[X] Shuffle
[X] Synthfilter
[X] VaryBand
[W] Convolotron - other excellent lv2 convolution engines already exist
[W] Looper - other good lv2 loopers exist
[X] MuTroMojo
[X] Echoverse
[X] CoilCrafter
[X] ShelfBoost
[X] Vocoder
[X] Sustainer
[X] Sequence
[X] Shifter
[X] StompBox - an extra plugin for fuzz mode was added, as interface is
[X] Reverbtron
[X] Echotron
[+] StereoHarm - no midi mode
[X] CompBand
[X] Opticaltrem
[X] Vibe
[X] Infinity

Further help using them can be found at . I hope you all find these
enjoyable, and make great music with them.

Please download and try them at:



by at July 31, 2015 08:40 PM

July 29, 2015

Create Digital Music » open-source

Drum machines in your browser, and more places to find Web Audio and MIDI


Open a new tab, and suddenly you have a powerful, sequenced drum synth making grooves. Give it a shot:
Or read more. (This latest creation came out in June.)

This is either the future of collaborative music making or the Single Greatest Way To Make Music While Pretending To Do Other Work I’ve ever seen.
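For a sense of what’s under the hood of toys like this, here’s a minimal Web Audio sketch (TypeScript) of the classic synthesized kick: a sine oscillator whose pitch and gain both fall away exponentially, scheduled on the audio clock. This is the general technique, not the linked app’s actual code, and note that browsers want a user gesture before an AudioContext will produce sound.

    const ctx = new AudioContext();

    // One kick: a sine whose pitch and level both drop exponentially.
    function playKick(time: number): void {
      const osc = ctx.createOscillator();
      const gain = ctx.createGain();
      osc.connect(gain).connect(ctx.destination);

      osc.frequency.setValueAtTime(150, time);
      osc.frequency.exponentialRampToValueAtTime(40, time + 0.15);

      gain.gain.setValueAtTime(1, time);
      gain.gain.exponentialRampToValueAtTime(0.001, time + 0.4);

      osc.start(time);
      osc.stop(time + 0.4);
    }

    // Four-on-the-floor at 120 BPM, scheduled slightly ahead of "now".
    const start = ctx.currentTime + 0.1;
    for (let beat = 0; beat < 4; beat++) {
      playKick(start + beat * 0.5);
    }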

But, as a new effort works on sharing music scores in the browser, it’s worth checking up on the Web Audio API – the stuff that makes interactive sound possible – and connections to hardware via MIDI.

And there’s a lot going on, the sort of fertile conversation that could lead to new things.

Web Audio and Web MIDI are quite fresh, so developers around the world are getting together to learn from one another and discuss what’s possible. That includes the USA, UK, and Germany:

New York:

Paris was also host to an annual, international conference, which took place this year at famed research center IRCAM.

Online synths and other proofs of concept are likely just the beginning. Web music development began as a sometimes muddled conversation about whether browsers will replace traditional app deployment (so far, probably not). But as the tech has matured, developers are instead looking at ways to use the Web to create new kinds of apps that perhaps didn’t make sense as standalone tools in “native” software (or, for that matter, hardware).

That’s why it’ll be interesting to watch efforts like Yamaha’s to add browser-based patch editing and sharing for their Reface line. There are also more ambitious ideas, like using the browser to share audio for interviews, radio conversations, backup, and works-anywhere recording and streaming.

And there’s more.

Keith McMillen has a great two-article series introducing you to Web MIDI.

It explains what this is all about and what it can do – whether or not you are a developer, it’s worth reading. And if you are a developer, there are code snippets!

There’s even some explanation of how to use MIDI code outside of Chrome. (Firefox and even Microsoft’s new Edge promise support soon.)

Making Music in the Browser – Web MIDI API

Making Music in the Browser – Web Audio API, Part 1
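As a taste of what those articles cover, this is roughly the smallest useful Web MIDI program – a sketch in TypeScript that works in Chrome today: request access, enumerate inputs, and log note on/off messages.

    // Request MIDI access, list every input, and log incoming notes.
    navigator.requestMIDIAccess().then(
      (midi: MIDIAccess) => {
        midi.inputs.forEach((input: MIDIInput) => {
          console.log(`MIDI input: ${input.name}`);
          input.onmidimessage = (event: MIDIMessageEvent) => {
            const data = event.data;
            if (!data) return;
            const [status, note, velocity] = data;
            const command = status & 0xf0; // strip the channel nibble
            if (command === 0x90 && velocity > 0) {
              console.log(`note on: ${note}, velocity ${velocity}`);
            } else if (command === 0x80 || command === 0x90) {
              // 0x90 with velocity 0 is also note off
              console.log(`note off: ${note}`);
            }
          };
        });
      },
      () => console.error("Web MIDI access refused or unavailable")
    );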

And their blog in general is full of surprisingly geeky wonderful stuff, not the normal marketing stuff. (In fact, let’s be fair, you’d fire your marketing manager if they did this. But… kudos.)

When we first started using the Web, it seemed like a clumsy way to duplicate things done better elsewhere. Now, it promises to be something different: a place that takes the software and hardware we love, and makes it more useful and connected. There’s something wonderful about switching the Internet off in the studio and focusing on making music for a while. But in this model, when you do turn the Internet on again, it becomes a place to focus more on music rather than be distracted.

The post Drum machines in your browser, and more places to find Web Audio and MIDI appeared first on Create Digital Music.

by Peter Kirn at July 29, 2015 11:24 AM

July 28, 2015

Pid Eins

Announcing systemd.conf 2015

We are happy to announce the inaugural systemd.conf 2015 conference of the systemd project.

The conference takes place November 5th-7th, 2015 in Berlin, Germany.

Only a limited number of tickets are available, hence make sure to sign up quickly.

For further details consult the conference website.

by Lennart Poettering at July 28, 2015 10:00 PM

Libre Music Production - Articles, Tutorials and News

KXStudio Website has moved

KXStudio is the latest in a line of open source projects to move away from sourceforge. The project is now hosted on new servers, and the new website can now be found at

For full details about the move, see the announcement here.

by Conor at July 28, 2015 11:26 AM

July 27, 2015

Linux Audio Announcements -

[LAA] Qtractor 0.7.0 - The Muon Base is out!


Stepping up to the Summer'15 release frenzy stage, in its fourth and
hopefully last act,

Qtractor 0.7.0 (muon base beta) is out!

Qtractor [1] is an audio/MIDI multi-track sequencer application written
in C++ with the Qt framework [2]. Target platform is Linux, where the
Jack Audio Connection Kit (JACK [3]) for audio and the Advanced Linux
Sound Architecture (ALSA [4]) for MIDI are the main infrastructures to
evolve as a fairly-featured Linux desktop audio workstation GUI,
specially dedicated to the personal home-studio.

As a major highlight of this release, the mapping/assignment of regular MIDI controllers to main application menu command actions, just like normal PC-keyboard shortcuts, is being introduced (cf. main menu Help/Shortcuts...).

Have a 'hotta' Summer'15 ;)



Project page:


- source tarball:

- source package (openSUSE 13.2):

- binary packages (openSUSE 13.2):

- wiki (help wanted!):

Weblog (upstream support):

Qtractor [1] is free, open-source Linux Audio [6] software,
distributed under the terms of the GNU General Public License (GPL [5])
version 2 or later.

- Complete rewrite of Qt4 vs. Qt5 configure builds.
- Revised MIDI Controllers catch-up algorithm.
- Mixer multi-row layout gets a little bit of a fairness fix.
- Non-continuous MIDI Controllers now have their Hook and Latch options disabled, as those are found not applicable.
- As an alternative to PC-keyboard shortcuts, MIDI controllers are now
also assignable and configurable for any of the main menu command
actions, all from the same old configuration dialog (Help/Shortcuts...).
- Fixed missing Track and Clip sub-menus from the Edit/context-menu that were found AWOL ever since the Lazy Tachyon beta release (> 0.6.6).
- An off-by-one bar position (as in BBT: bar, beat and ticks) has been purportedly fixed, as far as LV2 Time/Position atom event transfer goes.
- French (fr) translation line to desktop file added (patch by Olivier
Humbert, thanks).
- A new top-level widget window geometry state save and restore
sub-routine is now in effect.
- Improved MIDI clip editor resilience across tempo and time-signature changes.
- Keyboard shortcuts configuration (Help/Shortcuts...) now lists
complete menu/action path where available.
- Fixed in-flight VST plugin editor (GUI) resizing.
- Added support to LV2UI_portMap extension, found really handy for the
cases where you have multiple plugins with different port configurations
and a single common UI to drive them all (pull request by Hanspeter
Portner aka. ventosus, thanks).


[1] Qtractor - An audio/MIDI multi-track sequencer

[2] Qt framework, C++ class library and tools for
cross-platform application and UI development

[3] JACK Audio Connection Kit

[4] ALSA, Advanced Linux Sound Architecture

[5] GPL - GNU General Public License


See also:

Enjoy && keep the fun.
rncbc aka. Rui Nuno Capela
Linux-audio-announce mailing list

by at July 27, 2015 01:41 PM

Hackaday » digital audio hacks

Make a Microphone Out of a Hard Drive

[Rulof Maker] has a penchant for making nifty projects out of old electronics. The one that has caught our eye is a microphone made from parts of an old hard drive. The drive’s arm and magnet were set aside while the aluminum base was diagonally cut into two pieces. One piece was later used to reassemble the hard drive’s magnet and arm onto a wooden platform.

The drive’s arm and voice coil actuator are the key parts of this project. The arm was modified with a metal extension so that a paper cone cut from an audio speaker could be attached, an idea used in microphone projects we’ve previously featured. Copper wire scavenged from the speaker was then soldered to the voice coil on the arm as well as an audio jack. In the first version of the Hard Drive Microphone, the arm is held upright with a pair of springs and vibrates when the cone catches sound.

While the microphone worked, [Rulof] saw room for improvement. In the second version, he replaced the mechanical springs with magnets to keep the arm aloft. One pair was glued to the sides of the base, while another pair recovered from an old optical drive was affixed to the arm. He fabricated a larger paper cone and added a pop filter made out of pantyhose for good measure. The higher sound quality is definitely noticeable. If you are interested in more of [Rulof’s] projects, check out his YouTube channel.

First Version:

Second Version:

Filed under: digital audio hacks, musical hacks

by Theodora Fabio at July 27, 2015 11:01 AM

July 25, 2015

KXStudio News

KXStudio Website has moved

Hey all,

As you might have noticed, sourceforge has been out of service for a while now. That, coupled with the previous adware/spyware fiasco, led me to look for alternatives.

So you can now find the KXStudio website at

The KXStudio repositories have already been updated to NOT use sourceforge anymore. New releases will be hosted at github, possibly mirrored on google drive, and I've made some changes to make the website and repositories easier to move in case something like this happens to github too.

Sorry for any inconvenience.

by falkTX at July 25, 2015 09:23 PM