planet.linuxaudio.org

December 21, 2014

Talk Unafraid

Setting up the Taranis X9D+ and OpenTX

With more complex stuff on my quad comes an increased need for a more complex radio, so I opted to upgrade from an aging Turnigy 9X to a FrSky Taranis X9D+ with an X8R receiver – this nets me not only 16 channels via S.BUS on the receiver and a telemetry link, but a fancy programmable transmitter!

It’s a bit daunting to get it all set up but pretty easy once you get the hang of it. This is going to be a quick writeup of how I went about setting it up, configuring the transmitter for the Pixhawk flight controller and flight modes, and some nice things to know that aren’t that clear from the docs.

So the first thing you want is the OpenTX Companion. The Companion will handle downloading and flashing firmware for you. On Windows 7 you’ll also need the Zadig utility; plug your radio in (switched off) and then open Zadig, find STM32 BOOTLOADER and hit the big button. That’ll get your drivers set up right.

When you start it, the Companion will ask if you want to go ahead and flash the radio – don’t yet. You want to dive into Settings -> Settings, tell it which radio you’re using, and check the “noheli” and “lua” options. Other stuff worth setting in there includes the automatic backup path in the application settings, and the SD structure path. The SD structure path points at a local copy of the SD card so you can get sounds etc.

Which brings us to the SD card. Pop it out, back up the contents somewhere, wipe it clean and replace the contents with this copy off the FrSky website. This includes an extensive sound set and is generally up-to-date. You’ll also want to keep a copy of this (with any changes you make) on your PC for the simulator and settings dialogs.

Flash, aaa-ah! Saviour of the universe!

Now we should have a radio with a newly-filled microSD card, but some old firmware. Plug the radio in on USB with the radio powered down, and then hit the “Download” button in OpenTX Companion. Download the latest firmware somewhere and then hit Read/Write -> Write Firmware. Once it’s done you can reboot and check it all works.

Once you’ve done that, power down the radio again, unplug it, and start it up while pressing the two bottom trim switches towards the power switch. This gets you into the bootloader. Plug in your USB cable and your SD card will show up in Windows – but most importantly you can go ahead and hit “Read models from radio”. This gets you a copy of everything in your radio to start with. You can then proceed to start tweaking! You need to go back to this mode to be able to write stuff, too.

Mixing it up

Let’s open a model up and take a look. For starters we’ve got the setup page. This has simple stuff like what radio system to use, switch and pot warnings, and timers. Timers are helpful for keeping track of battery usage (though if you get telemetry set up, that’s good too) – I have one set up to count down my expected battery life using the THs counter.

Next we have the flight modes. Skip this tab for now. Briefly, though, a little aside on terminology.

The Taranis has a bunch of inputs. Throttle, elevator, rudder etc are named what you’d expect but other physical controls are things like SA, SB, etc. There’s also S1/S2, and L1/L2. The latter four are pots – S1/2 on the front, L1/2 on the sides. The former are switches – you have $MANY on the X9D+. Where you see things like an arrow pointing up/down next to a switch, that’s going to evaluate to “on” if that’s the state the switch is in.

In inputs we can assign each physical control to an input channel. I’ve done this for four switches. The benefit of adding these here rather than directly in the mixer is that we can apply things like curves to them. This is actually very useful for the Pixhawk. The Pixhawk/PX4 expects 3 two-position switches and a single 3-position switch for selecting your flight mode. The Taranis is overequipped with three-position switches, so I’ve set up a curve that treats positions 2 and 3 identically. To do this, go into Curves, pick a Curve (eg Curve 2), select 4 points, and set them at -100,-100;-50,-100;-1,100;100,100. Then you can pick Curve 2 for that input.

But back to the mixer. What’s it all about? The mixer is what actually produces the output values. In most cases for multirotors you just want to pass through your input channels, but you might want to do something fancier. For that Pixhawk setup, simply set channels 5-8 to correspond to inputs 5-8 where you’ve put your switches.

It seems logical, captain

Logical switches are kinda neat. Let’s say you have, as in our example, a 3 position switch with some 2 position switches cascaded from the output. In each combination we derive a logical state and it’d be nice to announce that, since the Taranis has audio playback.

I used the festival tts “text2wave” utility to generate some WAV files, and then used ffmpeg to resample them to mono 32kHz files. This is required for playback to work. Sounds simply drop into the SOUNDS/en/ folder. Note there’s an 8 character file name limit, though, not counting the .wav! Very long filenames won’t show up in the radio and will get truncated in settings.
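For reference, here’s a sketch of that conversion – the phrase and file names are just examples, and forcing 16-bit PCM with -acodec is my own precaution rather than anything the radio documents:

# Generate speech with festival's text2wave, then convert it for the Taranis.
echo "Full auto" | text2wave -o fullauto-raw.wav
# Downmix to mono, resample to 32kHz and force 16-bit PCM; keep the final
# name to 8 characters or fewer (FULLAUTO fits exactly).
ffmpeg -i fullauto-raw.wav -ac 1 -ar 32000 -acodec pcm_s16le FULLAUTO.wav
# Then drop FULLAUTO.wav into SOUNDS/en/ on the SD card.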

In logical switches, we can take, for instance, L1, and set the function a<x. The OpenTX documentation lists the functions and what they do; this is a simple less than comparison.  We can point V1 at a channel in the mixer, for instance CH5, and set V2 to a constant like 0.  L1 being on now corresponds to the default (up) state of my 3-position switch being on.

You could do this particular thing just by using the SA position booleans directly, but logical switches also have an AND switch function. This means we can, for all our two-position switches, only turn on our logical switches if the top-level switch is in the right position.

Special Functions are on the next tab. In here we can set actions for each switch or logical switch, like “Play track” to play some audio. I’ve got my radio set up to play the right bit of speech to tell me which flight mode I’m now in, eg “Full auto”, “RTLS”, “Manual”, “Position hold” etc.

This really is the tip of the iceberg – you could much more easily (in theory) do my Pixhawk flight mode announcement in Lua, as the radio supports scripting, but I’ve not had the time to figure out how to just yet! Combined with telemetry you can do automated warnings and announcements for altitude, speed, battery and more. It’s a very cool bit of kit and worth persevering with to get it set up just right for your use-case.

by James Harrison at December 21, 2014 09:41 PM

Hackaday » digital audio hacks

World’s First Smart Snowboard Changes Music According To Your Actions

Ever wanted a soundtrack to your life? For a couple of minutes at a time, Signal Snowboards creates that experience with a smart snowboard that varies your music depending on the tricks you perform on your way down the mountain.

The sign on the door says “School For Gifted Hackers”. Inside, [Matt Davis] helped interface audio with an accelerometer – something he regularly does with all manner of hacked devices. At first the prototype was an iPhone mimicking the motions of a snowboarder the way fighter pilots describe dogfights with their hands. The audio engine that maps those motions to sound is open source and anyone is welcome to do their own tuning.

Once the audio was figured out, the boys took it back to their shop and embedded the sensors into a new snowboard. The board is equipped with GPS, an accelerometer, a few rows of LEDs and a Bluetooth board to connect to the phone app. It’s all powered by an on-board LiPo battery, with a barrel jack out the side to charge it. Channels were cut by hand with a router, then the electronics were sealed in place with epoxy. Not wanting to “just strap some Christmas lights onto a snowboard”, the lighting is also connected to the sensors and is programmable.

See the video below of them making the board and taking it out for a test run on Bear Mountain.

Thanks [Ronald] for the tip.


Filed under: digital audio hacks

by Matt Freund at December 21, 2014 09:01 AM

December 20, 2014

OpenAV

OpenAV features in UbuntuUser Magazine! (+ more)

OpenAV in Ubuntu User magazine!  Check out the full interview here, and thanks again to Sam Tuke for doing the interview, and countless hours of editing. It was great to get to chat about all things linux-audio, and reading it now, it is easy to see the progress made since the interview was done. Amazing things in store for Linux audio! ArtyFX 1.3… Read more →

by harry at December 20, 2014 08:26 PM

Libre Music Production - Articles, Tutorials and News

EQ10Q V2 Beta6 is now available

Pere Rafols Soler has just released EQ10Q V2 Beta6. EQ10Q is a powerful and flexible parametric EQ. Alongside the download you will find a gate and a compressor plugin. With beta6 there is now also a new plugin called BassUp. You can read a full description about each plugin here.

New features in this release include -

by Conor at December 20, 2014 04:33 PM

GStreamer News

GStreamer Core, Plugins and RTSP server 1.4.5 stable release

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 release series adds new features on top of the 1.2 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The 1.4.x bugfix releases only contain important bugfixes compared to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release. The Android binaries are now built with the r10c NDK and are as such binary compatible again with all NDK and Android releases. Additionally, binaries for Android ARMv7 and Android x86 are now provided. This binary release features the first 1.4 releases of GNonLin and the GStreamer Editing Services.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.
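As a quick illustration of that parallel installation (assuming your distribution packages both series), the 0.10 and 1.x command-line tools are installed under different, versioned names and can be checked side by side:

# Each series ships its own separately named tools, so both can coexist.
gst-inspect-0.10 --version
gst-inspect-1.0 --version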

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky to include in a bugfix release.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

December 20, 2014 03:00 PM

December 19, 2014

ardour

Nightly Builds Now Available

Thanks to the amazing hard work of Robin Gareus and funding from Harrison Consoles, ardour.org now has nightly builds of the development version of Ardour available. Read more details below.

read more

by paul at December 19, 2014 03:27 PM

Sam Tuke » Audio

My Linux Audio interview in Ubuntu User

It was months in the making, finally reached news stands last month, and now it’s free to read online. That’s right, you can read my five-page interview with Harry van Haaren on the Ubuntu User website. The printed copy looks much prettier however, and also includes a three-page guide to using Harry’s suite […]

by samtuke at December 19, 2014 02:45 PM

Nothing Special

JACK Midi Control from your Android Phone over WiFi

Since my fundraising for my plugin GUIs is more or less bust, I've been thinking more about making music. I can't really stop developing, because I'm fairly addicted, but I'm trying to wean myself back to just making whatever I want, rather than trying to develop stuff that will change the world. The world doesn't care that much. Anyhow, this blog is no place for bitterness.

But I have been playing with synths a bit more lately and still am really missing my expression controls. Now I could try to use the old touchmidi app I developed, but it only works with my laptop, and I now have a dedicated desktop in the studio to host my synths so I don't have a touchpad to use. I do have several android devices though. They should be great input interfaces. They can display whatever they like on the screen and have a touch interface so you can have arbitrary configurations of buttons, sliders, knobs, whatever. So I decided to figure out how.



There are a few tools you need, but first: an overview. The key to sending control information from one device to another is OSC (Open Sound Control), which in a nutshell is a way to send data over a network, designed for media applications. We need something to interpret touch input on the phone or tablet and send an OSC message. Then something needs to receive the message, interpret it, convert it to a MIDI message and send it out a JACK MIDI port. Well, we aren't reinventing the wheel; there are several programs that can do these or related tasks.

One closed-source system is TouchOSC. They have a free desktop app and the Android app is a few dollars. But it's closed source and doesn't actually work on Linux. UNACCEPTABLE.
There are several other apps, many of them free, and none of them have had updates in the last few years. Ardroid is an OSC controller, but it is meant for Ardour; you can't make custom interfaces like you can with TouchOSC.

What we want is Control. It hasn't been updated for a few years and it's kinda buggy, but it's OPEN. I could go fix it from the GitHub source (but it would take me a bit of research on Android stuff and I'm not a Java fan), but it's good enough as it is to get by. It uses customizable interface layouts written in JSON, and you can add scripting to the interface through JavaScript, so no crazy new formats. The real bugs are just in the landscape and portrait mode switching, so I have to open and close the interface a couple of times before it gets it right. It's also cross-platform.

I was able to make an interface after a few infuriating late-night hours of trying to edit one of the examples (which I'm pretty sure had a lot of iOS-specific JSON tags), then trying again the next morning using the blank template and getting it done in about an hour. I never learn. It's a little involved to make a custom interface, but there are good examples and decent documentation. It seems that after a while of development the author focused much more on iOS and neglected the Android side, so there are several tags like color that don't work on Android and make the interface much buggier if you attempt to use them. If someone wants to be super rad they would take this app and fix it up, make it a bit more robust, report JSON errors found in layouts, etc... But it's good enough that I'll just keep using it as it is.

For starters, you can just use the interfaces the app comes with. Go ahead and install it from Google Play. They have a pretty good variety of interesting interfaces that aren't all exactly conducive to MIDI control (remember, apps CAN have built-in OSC support, like Ardour and non-mixer do), but any of the interfaces built into Control will work. I'll tell you a bit about some other interfaces later.

UPDATE: Since I started writing this, another open-source Android app has come up: andrOSC. It's much simpler but allows you to edit everything in the app. It's a great option for anyone who wants a custom interface without any sort of file editing or extra uploading. It's a little too simple for what I want, so I'm going to stick with Control for now.


Now we just need someplace to send the OSC data that the Control interface will output. I thought mididings was the tool for this job, but it only accepts a few specific OSC commands and there is no way to change the filtering through OSC aside from making scenes and switching between them. So that was a dead end.

But the libraries used by mididings are openly available, so I figured I'd just make a Python script to do the same thing. Except I've only ever edited a few Python scripts, and this is a few steps beyond that. The libraries used are actually C libraries anyway, so C wins. Hold on a sec, I'll just whip up a C program.

(Months go by)

Ah that was quick, right? Wait, Christmas is in how many days?!
Oh well. It works. And it's completely configurable (with more features on the way). This gives you the flexibility to change the mapping wherever it's easiest: in your OSC client app (i.e. our Control interface), in the osc2midi converter map file, or in the MIDI bindings of the target program. In another post I'll describe how to use osc2midi more fully (with custom mappings); for now the default one works (because I wrote it to work with all the example interfaces that come with Control).

Download the files, extract them, and install them:
cd osc2midi
mkdir build
cd build
cmake ..
make
sudo make install

Then run:
osc2midi


Not too bad (if it didn't work, cmake should help you, or just check the README for the dependencies). This gives you a JACK MIDI client and opens an OSC server at 192.168.X.X:57120 (your computer's IP address : default port). If you don't know your IP address you can get it with the terminal command ifconfig (or ip addr). Connect the JACK MIDI out port to Carla or whatever you like (I'll proceed assuming you are using Carla).
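If you'd rather do that wiring from the terminal than a patchbay GUI, the stock JACK command-line tools work too – the port names below are only guesses at what osc2midi and Carla actually register, so check jack_lsp first:

# List every JACK port so you can see the exact names on your system.
jack_lsp
# Connect osc2midi's MIDI output to Carla's event input (names will differ).
jack_connect "osc2midi:midi_out" "Carla:events-in"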

Now we need to connect the Control app to the OSC server. Click the destinations tab and the plus sign at the top to enter in the address of the OSC host. From there go back to the interfaces tab and select whatever interface sounds like it will tickle your fancy. For this exercise we'll go ahead and use the mixer interface. If it looks squashed or doesn't fit on the screen you probably have to go back to the menu and open the interface one more time. I always have to open it twice like that (obnoxious, right? you should really go fix that!).

In Carla, load a plugin. Simple amplifier is perfect. First connect the osc2midi output port to the simple amplifier events input port. Open the editing dialog for the plugin and click the parameters tab at the bottom. Simple amplifier only has a single parameter (gain), so you'll see a single horizontal slider and two spinboxes to the right of it. These select the MIDI control number (CC) and channel.

Since we called osc2midi without arguments, it defaults to channel 1 (or 0 if you prefer 0-indexing), and the first slider in the mixer interface on your Android device will send MIDI CC 1, so select "cc #1" in the middle numeric box. You just bound that MIDI control to change the amplifier gain. If your JACK ports are all set up correctly you should be able to move the first slider on your phone/tablet and watch the slider in Carla wiggle up and down with it. You have CONTROL!!!

The setup

The other sliders are just sequentially mapped through CC 12. You can figure out what anything is mapped to by either using kmidimon to look at the MIDI output or running osc2midi in verbose mode (osc2midi -v). You can add other plugins and control them by the same method. Get creative with it.
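You can also poke the server without the phone at all. liblo ships a small oscsend utility (assuming you have liblo's tools installed), and the path below is just a made-up example – run osc2midi -v to see which OSC paths your map file actually listens for:

# Fire a single float (0.75) at osc2midi on this machine; /1/fader1 is a
# placeholder path, so substitute whatever your mapping expects.
oscsend localhost 57120 /1/fader1 f 0.75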

For my next trick, we'll make some sounds with Control. Hit the menu button on Control and select the Conway's Game of Life interface. The default mapping works, but I include a better one with the installation. So in the terminal hit Ctrl+C to stop osc2midi. We'll start it again, specifying a different map file. Type in:
osc2midi -m gameOfLife.omm
And the server will be restarted on the same port so you won't need to change anything else.

Now let's load an instrument plugin into Carla. I like amsynth, but you can use any you like (though a polyphonic one will probably be more fun for this exercise). With an instrument plugin loaded, just connect the MIDI-in port of the instrument to the osc2midi output, and the audio output from your plugin/Carla to your soundcard. Click several of the buttons on the grid in Control and you should hear some notes start playing. They will stay on till you click them again. Or click the start button and the notes start toggling themselves, and you have a nice random sequencer.

This isn't quite enough for simulating all the expression you can get out of a good synthesizer or MIDI controller, but it's a start. The best part is that these open tools are all highly configurable, so we have a working system just using the defaults, but we can also set up very powerful custom configurations by tweaking the interface in Control, the mapping in osc2midi, and often the MIDI bindings in your destination program (like Carla). We'll save the rest for another post. In the meantime, try all the templates and get creative with these tools.

by Spencer (noreply@blogger.com) at December 19, 2014 09:15 AM

December 18, 2014

Nothing Special

Easy(ish) Triple Boot on 2014 Macbook Pro

Nothing is easy. Or perhaps everything is. Regardless, here is how I did it, but first a little backstory:

I got a MacBook Pro 11,3 from work. I wanted a Lenovo, but the boss wants me to do some iOS stuff eventually. That's fine, because I can install Linux just as easily on whatever. Oh wait… there are some caveats. Boot Camp seems to be a little picky. Just as well. The MIS clowns set up Boot Camp so I had Windows 7 and Yosemite working, but they told me I'm on my own for Linux. It seems from the posts I've read about triple booting that you have to plan it out from the get-go of partitioning, not just add it in as an afterthought. But I also found suggestions about Wubi.



I've used Wubi and didn't really understand what it did, but it's actually perfect for setting up a triple-boot system in my situation (where it's already dual boot and I want to tack on Linux and ignore the other two). There is a lot of misunderstanding that Wubi is abandoned and no longer supported, blah blah. The real story is that the way Wubi works doesn't play nicely with Windows 8. Therefore, since it doesn't work for everybody, Ubuntu doesn't want to advertise it as an option. It's there, but they'd rather have everyone use the most robust method known: a full install from the live CD/USB. Not that Wubi is rickety or anything, but it only works in certain situations (Windows 7 or earlier). The reality is it's on every desktop ISO downloaded, including the latest versions (more on that later).

The way Wubi works is important to note too (and it's the reason that it's perfect for this situation). Wubi creates a virtual disk inside the NTFS (Windows) partition of the disk. So instead of dividing the hard drive space into two sections (one for Linux, one for Windows, and/or a third for OS X if triple booting), it doesn't create disk partitions at all, just a disk file inside the existing Windows partition. The Windows bootloader is configured to open the Windows partition, then mount this file as another disk in what's called loopback mode. This is distinctly different from a virtualized environment, where a virtual disk is often running on virtual hardware. You are using your actual machine; just your disk is configured in a unique but clever way.

The main downside, it sounds like, is that you could have poor disk performance – in extreme cases, VERY poor performance. Since this machine was intended for development it's maxed out with 16GB of RAM, so I'm not even worrying about swap, and the 1TB HDD has plenty of space for all 3 OSes; it's a fresh install so it shouldn't be too fragmented. These are the best conditions for Wubi. So far it seems to be working great. The install took a little trial and error though.

So I had to at least TRY to teach you something before giving you the recipe, but here goes:

  1. I had to install the Boot Camp drivers in Windows. MIS should have done that but they're clowns. You'll have to learn that on your own; there are plenty of resources for those poor Mac users. This required a boot into OS X.
  2. Boot into Windows.
  3. Use the on-screen keyboard in the Windows accessibility options to be able to hit Ctrl+Alt+Delete, to make up for the flaw that MacBooks have no Delete key (SERIOUSLY?). Also don't get me started on how I miss my Lenovo TrackPoints.
  4. I installed SharpKeys to remap the right Alt to be a Delete key so I could get around this in the future. I know sooner or later Cypress will make me boot into windoze.
  5. Download the Ubuntu desktop live CD ISO (I did the most recent LTS. I'm not in school any more, gone are the days where I had time to change everything every 6 months).
  6. In Windows, install something that will let you mount the ISO in a virtual CD drive. You could burn it to CD or make a live USB, but this was the quickest. I used WinCDEmu as it's open source.
  7. Mount the ISO and copy wubi.exe off of the ISO's contents and into whatever directory the ISO is actually in (i.e. Downloads).
  8. Unmount the ISO. This was not obvious to me and caused an error in my first attempt.
  9. Disable your wifi. This was not obvious to me and caused an error in my second attempt. This forces wubi to look around and find the ISO that is in the same folder rather than try to re-download another ISO.
  10. Run wubi.exe.
  11. Pick your install size, user name, all that. Not that it matters, but I just did vanilla Ubuntu since I was going to install i3 over the Unity DE anyway. Historically I always like to do it with Xubuntu, but I digress.
  12. Hopefully I haven't forgotten any steps, but that should run and ask you to reboot. (I'd re-enable the wifi before you reboot, or else you'll forget like I did and wonder why it's broken on your next Windows boot.)
  13. The reboot should complete the install and get you into ubuntu.
  14. I believe the next time you reboot it will not work. For me it did not. It's due to a GRUB 2 bug, I understand. Follow the solutions in these two threads: 
    1. http://ubuntuforums.org/showthread.php?t=2218439&p=13094149#post13094149
    2. http://ubuntuforums.org/showthread.php?t=2217829&p=12996954#post12996954
  15. To roughly summarise the process, hit the e key to edit the GRUB entry that will try to load Ubuntu. Edit the line
    linux /boot/vmlinuz-3.13.0-24-generic root=UUID=bunchofhexidec loop=/ubuntu/disks/root.disk ro

    ro should be changed to rw. This will allow you to boot. The first post tells you to edit an auto-generated file. That's silly: what happens when it gets auto-generated again and overwrites your fix? It even says not to edit it in the header. Instead you need to make a similar change to the file that causes it to have that error and then generate those files again, as described in the second link.
  16. Once that is sorted out you'll probably notice that the wifi is not working. You can either use an ethernet port adapter or a USB wifi card (or figure out another way) but get internet somehow and install bcmwl-kernel-source and it should start working (maybe after a logout. I don't remember).
  17. Another tweak you will need is for the fact that this screen has a ridiculously high DPI, so the default fonts are all teensy-tiny. The easiest workaround is just to lower the screen resolution in the Displays settings of unity-control-center, but you can also edit the font sizes in that dialog and/or using unity-tweak-tool. I'm still ironing that out, especially since my secondary monitors are still standard definition. xrandr --scale is my only hope (see the rough sketch after this list). Or just lower the resolution.
  18. You might find that the touchpad click doesn't work as you expect. Try running the command:
    synclient ClickPad=0
    and see if you like it better. I sure do. Also enable two-finger scrolling in unity-control-center.
  19. Also, importantly, Wubi only allows up to 30GB of virtual disk to be created. I wanted a lot more than that, so I booted off a USB live stick I had lying around and followed the instructions here to make it a more reasonable 200GB.
  20. Finally install i3-wm, vifm, Eclipse, the KXStudio repos and everything else you love about Linux.
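As promised in step 17, here's the rough xrandr idea – only a sketch, and the output names (eDP1 for the panel, HDMI1 for the external) are guesses, so check xrandr -q for what your machine actually calls them:

# See which outputs exist and what modes they're running.
xrandr -q
# Give the standard-definition external a 2x2-scaled virtual resolution so
# fonts sized for the HiDPI panel don't look enormous on it (slight blur).
xrandr --output HDMI1 --scale 2x2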
So I love my MacBook. Because it's a Linux box.

by Spencer (noreply@blogger.com) at December 18, 2014 04:14 PM

December 17, 2014

Scores of Beauty

Catching up with the Mutopia Project

One of the truly impressive parts of the broader LilyPond “ecosystem” is the Mutopia Project. It currently offers an astounding 1888 pieces of music for free download in LilyPond, PDF, and MIDI formats. Every single piece is in the public domain or licensed under a Creative Commons license. What’s even more amazing is that they were all typeset by volunteers. In this post I will discuss some recent progress in the Mutopia Project, and acknowledge the valuable work of the volunteers who contribute to it.

Updating Older Files

Over the years as LilyPond continues to improve with each new version, the files offered by the Mutopia Project get further and further behind the current version of LilyPond — unless they are updated. Since the project was begun before 1999 (not long after LilyPond herself was born in 1996) it’s no surprise that some of the files are for relatively ancient versions of LilyPond. Until recently there was even one file that went back to LilyPond version 1.4.7 (released in 2001)!

Back in January of 2014 Glen Larsen initiated an effort to update the oldest of the Mutopia Project files. At that point there were 15 files for LilyPond version 1.x.x and these were targeted for the first phase of the effort. Several volunteers stepped up to update these oldest (and most difficult to update) files, including Federico Bruni, Glen Larsen, Javier Ruiz-Alma, Francisco Vila, and Valentin Villenave. (Let me know if I’ve missed anyone!)

A second update in February tackled another 15 files — those in the collection that were at version 2.0.x. All of these updates were submitted in only 11 days, as Karsten Richter joined the ranks of the volunteers.

In September a third update took on the 28 files in the collection at version 2.1.x. Several more volunteers came on board, including myself (Paul Morris), Abel Cheung, Felix Janda, and Knute Snortum.

Recently a fourth update has begun that targets the 27 files at version 2.2.x.

These efforts are coordinated through the Mutopia Project’s mailing list and GitHub repository. All of the Mutopia Project’s files are now hosted on GitHub, and you can see the “milestones” for updates one, two, three, and the current update four.

If you think about it, it is pretty impressive that files this old can still be successfully updated (with help from convert-ly, of course). That kind of longevity is not common in the quickly evolving world of technology where today’s “shiny and new” quickly becomes tomorrow’s “old and obsolete.”
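(For anyone who hasn't used it: convert-ly ships with LilyPond and applies the syntax conversions one version at a time, from the \version a file declares up to the current release. The file name below is just an example.)

# Update a LilyPond source file in place, bumping its \version as it goes.
convert-ly -e oldpiece.ly
# Or leave the original untouched and write the converted output to a new file.
convert-ly oldpiece.ly > oldpiece-updated.ly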

One of the main motivations behind these updates is to simplify the maintenance of the Mutopia Project. To quote from this wiki page on updating files: “As the Mutopia Project grows, so does the job of maintenance, and maintenance is easier if all the Mutopia files stay as current as possible. The goal is not to keep all Mutopia files at the current stable release version of Lilypond, the goal is to have our archive use as few Lilypond releases as possible.”

Some additional benefits include better quality engraving of the updated files — since LilyPond’s engraving has come a long way since these earlier versions. And of course it’s a better experience for users when the files they encounter are for more recent versions of LilyPond. The older the file, the greater the chance that there will be difficulties when they update it.

Adding More “Popular” Works

In addition to the work updating older files, Javier Ruiz-Alma has collaborated with the IMSLP (another online library that primarily provides PDF scans of sheet music), to identify “popular” pieces of music (i.e. those that are more sought-after) that are not yet available from the Mutopia Project. They identified the most downloaded works on the IMSLP, those that get over 1,000 downloads a month, and came up with a list of 12 of these that were not yet in the Mutopia collection. (Unfortunately the Mutopia Project does not currently have the infrastructure to track its own downloads.)

Javier then organized an effort to add these 12 pieces to the Mutopia collection, offering them in LilyPond, PDF, and MIDI formats (whereas on IMSLP they are typically only available as scanned PDFs). As Javier put it in an email to the Mutopia mailing list, the goal is “to continue increasing the value Mutopia offers to those who search our site for free music, and improve the chance they’ll find the music they’re looking for.”

The following volunteers have contributed to this ongoing effort: Joram Berger, Federico Bruni, Abel Cheung, Glen Larsen, Javier Ruiz-Alma, Knute Snortum, and Steve (@stevetnz). There is a milestone tracker on GitHub where you can see its current status.

New Mutopia Project Footer

The official Mutopia Project footer that appears in each piece of music was redesigned in the past year. Here’s an example of the previous version:

[image: the old Mutopia Project footer]

And an example of the new version:

[image: the new Mutopia Project footer]

The new version looks great and is quite an improvement!

Volunteering

There are various different ways to help out with LilyPond directly. Volunteering with the Mutopia Project is another way to support the health of the broader LilyPond “ecosystem.” Anyone who knows how to use LilyPond can contribute. If you think you might be interested, then check out the Mutopia Project wiki, and/or inquire on the mailing list.

Kudos to all of the Mutopia Project volunteers for their ongoing work maintaining and improving this valuable resource! And a special thanks to Chris Sawer who has been leading the project since 1999!

by Paul Morris at December 17, 2014 08:00 AM

Talk Unafraid

Going mobile – my quadcopter so far

So over the last year or two I’ve been intermittently doing stuff with unmanned aerial systems. Nothing for work, strictly hobbyist stuff, and strictly for fun, though with a serious goal in mind.

More or less every year now the village I live in floods. The degree to which it does so varies, as does the response from the council and locals. Last year we managed to get aerial photography from some friends with a light aircraft handy which was fascinating to see – we could start to see the bounds and patterns of the flooding in context. Trouble is, it took a while to arrange and we only got one set of pictures.

Wouldn’t it be nice to be able to get more pictures, faster? Here’s my story so far in the wonderful world of multirotors…

Right away I made my first mistake: I bought into the UAir Kickstarter for my very own little quadcopter. The UAir Kickstarter turned out to more or less be a scam (and the company disappeared, only to reappear months later in South Africa under a different name), delivering an unflyable vehicle, broken hardware, awful flight controllers and telemetry. The code, far from being state of the art, was atrocious, and lacked key features. The hardware turned out not to be well-designed, either. The metal frame was prone to bending, and mounting options were limited. The speed controllers could easily short themselves mid-flight, and the motors were hard to source and used collet-type rotor clamps that in two cases detached during flight and in one case snapped the motor shaft off on landing.

Having learned that expensive lesson, I went about replacing parts until I had something flyable. My first iteration simply took the existing flight hardware and replaced the electronics: the speed controllers with JP 40A controllers, and the flight controller with a MultiWii – this got me flying, briefly. Then a motor failed mid-flight, flipping the quad and delivering it at some speed into a fencepost, breaking the frame entirely (and taking a chunk out of the post).

Old (bent) frame etc. on the right, new frame base and landing gear on the left prior to assembly

For revision three, I decided it was time to retire the metal frame, replacing it with a clone of the DJI F450 glass fibre/plastic frame and adding a set of landing gear to provide some additional clearance to hang payloads. I also swapped the MultiWii for an OpenPilot CC3D, which cost about the same but had a massively nicer set of tools and much more actively developed code. The CC3D was the first flight controller I’d used to feature the STM32 series of microcontrollers, which are a huge improvement over the 8-bit Atmel-based flight controllers I’d used previously. As well as having more room for complex code, the sensor filtering and fusion algorithms you can fit into these chips are clearly worlds ahead of what things like the ATmega series can manage.

CC3D on the new frame. 9X receiver on the right, Arduino for lights on the left. Foam for basic vibration isolation

The F450 frame has served me well so far. I replaced my Keda K20 motors with a set of 2217 880KV motors after a bearing failure on one of the K20s; the 880KV figure is the motor’s speed constant (RPM per volt) – lower KV broadly means more torque – and these motors are decent mid-range lifters. The quad can happily lift about 800g of payload in addition to its own mass of about 1.5 kilos (including battery). It might carry more, but I’ve not tried yet.

Slung below the frame are a battery mount and a vibration-isolating gimbal mount, hanging from a set of carbon fibre load tubes and both themselves carbon fibre. I tried several options for mounting cameras, computers and batteries before settling on this option for now. My first attempt was to build a battery housing which would hang from the load tubes and be vibration isolated, and then to mount the camera on that ‘clean’ section. This worked, but the balsa construction failed catastrophically in a ~300ft crash, and I opted for a more rigid approach. The crash was the first serious crash I’ve had – I flew nose-in and lost my bearings, tried to correct but failed and ended up cutting the motors when I couldn’t see it clearly. The tree actually cushioned the blow…

The view from the camera after the worst crash I’ve had, after a flyaway – that’s a very deep lake and this was feet away!

Remarkably, all of my hardware was ready to fly again immediately, sans one chipped prop blade.

The broken balsa battery mount with Raspberry Pi + camera (inverted)

While I don’t yet have a gimbal – widely accepted as being a non-optional addition to any multirotor for smooth video or photography – I do have a camera in the form of a Raspberry Pi B+ and camera module, seen above. It’s not perfect, but it does work quite well. I have some Python code which handles locking off the white balance and exposure settings on startup to avoid the sky causing the camera to overcompensate, causing the ground to just appear as a dark blob.

I’m also trying to get this video multicast back down to a ground station, but that’s taking some time – I have an old camera bag with a Ubiquiti Bullet M2 access point and Mikrotik 750G router which the Pi will connect to if it can see it. Latency is a major issue for real-time feedback to help adjust flight; there looks to be a lower bound on that of around 100ms with the Pi’s video encoder etc.
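The usual starting point here – and this is only a sketch, with arbitrary resolution, bitrate, multicast address and port rather than a finished setup – is to pipe raspivid straight into a GStreamer RTP pipeline:

# On the Pi: take H.264 straight off the camera and multicast it as RTP.
raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2000000 -o - | \
  gst-launch-1.0 fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=224.1.1.1 port=5000 auto-multicast=true
# On the ground station: receive, depayload and decode the stream.
gst-launch-1.0 udpsrc address=224.1.1.1 port=5000 \
  caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! \
  rtph264depay ! avdec_h264 ! autovideosink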

Looking towards Bablockhythe
Same view two days later after some heavy rain

I’m lucky to live in a rural area where flying is straightforward, from a legal compliance perspective – it’s easy to stay the required distances from people and buildings. I am (just) within the RAF Brize Norton CTR airspace so can’t exceed 400 feet in altitude but that’s really not an issue; I’m actually pretty certain I’m fine to fly higher given I’m flying a sub-7kg drone non-commercially, but I’ll be talking to Brize Norton to ask permission before I do so, just to be sure. It’s easy to see anything coming remotely close and get on the ground fast, so from a purely practical perspective it’s not an issue, but I’ve been making sure I comply with absolutely all of the CAA guidance and requirements throughout all this. I’m also a fully paid-up insurance-holding British Model Flying Association member – this gives me third party liability cover in the event of an accident.

Looking back towards home – I’m the little speck in the field
Looking back across Northmoor in cloudy weather – the row of houses in the middle of the shot is Griffiths Close

The only major downside to all of this is simply cost. It’s not an exaggeration to say that I have spent more on this project than I have on, say, my car, or my main computers. It’s incredibly fun – not just the camera side of things, but actually flying. Items like motors and speed controllers are £30 a pop – so now you need four of everything and suddenly that’s £240. Batteries are £40, carbon fibre frame parts range from ~£25 to ~£200 depending on the part, simple flight controllers and radios are <£100, but are limited. Props may only be £5 a pair, but you get through plenty while learning! Then there’s all the ancillary bits and pieces you need – power step-down boards, battery connectors and power wiring – it all adds up.

I’ve done a lot of building my own stuff, too, like soldering up my own power distribution looms and so on; this is something you can avoid doing but it’s much more fun to do so if you ask me!

Current payload/power configuration – note the rubber mounts holding the Pi/cam to help isolate it from vibration

My latest step is to upgrade the electronics on my quad, now I’ve gotten the actual flight hardware fairly well sorted. This involves replacing the CC3D with the Pixhawk flight controller, which is basically the most powerful flight control board out there (perhaps aside from DJI’s A2/Wookong boards, but neither of them are open source and both are viciously expensive). I’m also replacing my old Turnigy 9X radio with a Taranis X9D+ and X8R receiver combo, which gets me 16 channels of remote control and simple telemetry. The Pixhawk can make use of the MAVLink protocol for in-air control and telemetry, so I’ve got a set of 3DR 433MHz telemetry radios to do just that.

The main enhancement, though, is a GPS board for the Pixhawk to finally add “semi-autonomous” flight to the project. The goal of this next step is to get the quad to the point where it can fly itself, doing position hold and pre-programmed flight routes. I purposefully didn’t choose to do this from the outset so I could learn how to actually fly – I think I have a decent chance of being able to recover from an autopilot failure now. The Pixhawk/PX4 stack, based on the NuttX real-time operating system, provides a really great platform not only to use out of the box but to develop applications on, and to integrate with external computers. This opens all sorts of possibilities up!

Top view showing general layout of electronics

There’s still a lot to do in order to turn this quad into a serious platform for aerial photography. I’m going to mount a second Raspberry Pi (A+) pointing straight down with the NoIR camera to play around with vegetation analysis, and I still need to get a gimbal and a camera with a larger bit of glass on the front than the Pi camera for decent, stable photos! A first-person-video link would also be a huge bonus and help with orienting and flying the vehicle, but as with the GPS I’m holding off till I’ve had more practice. Still, even with this fairly simple rig I’m flying right now I can take decent photos and video, learn to fly properly and have a lot of fun. If you don’t mind bankrupting yourself, it’s a fantastic hobby.

by James Harrison at December 17, 2014 01:18 AM

December 16, 2014

Objective Wave

Hydrogen with multi-bank per instrument

The functionality of having multiple banks per instrument is now available in Hydrogen’s GitHub repository:

http://github.com/hydrogen-music/hydrogen

In a nutshell, each instrument can have multiple sets of layers, and the mixer has additional strips for each bank.

Let’s imagine you have samples for direct and overhead takes of a drum kit. Hydrogen can handle that: the JACK outputs per instrument will be separated, and you can control in the mixer how much of the direct/overhead takes you want.

Hope you guys and girls get the chance to try it out!


by blablack at December 16, 2014 06:41 PM

December 15, 2014

Libre Music Production - Articles, Tutorials and News

14 ways to contribute to Hydrogen!

The Hydrogen developers have just put up an interesting post on the Hydrogen website, 14 ways to contribute to Hydrogen!

What pops into most people's heads when they think about contributing to a software project is coding and bug testing, but there are many other ways to contribute too, especially for end users.

Check out the full list of ways you can contribute to Hydrogen.

 

by Conor at December 15, 2014 10:51 AM

OpenAV

OpenAV: Fabla 2.0 – 9 days in!

It's only been 9 days since the initial feature request that triggered it all (excuse the pun), but check out the Wiki! It is kept up to date with features as they are completed. An example? The “Pad Features” – they’re ~60% done! Review the features there, and if there’s something you’d like to add, just post on the wishlist. Code Thanks A lot of code has… Read more →

by harry at December 15, 2014 12:23 AM

December 13, 2014

Libre Music Production - Articles, Tutorials and News

Newsletter for December out now - Tutorials, news and survey

Our newsletter for December is now sent to our subscribers. If you have not yet subscribed, you can do that from our start page.

You can also read the latest issue online. In it you will find:

  • LMP's end of year survey
  • Third installment of 'LMP Asks', with Aurélien Leblond
  • New tutorial
  • New software release announcements

and more!

by admin at December 13, 2014 11:14 PM

OpenAV

Fabla 2.0 : Code and design underway!

A huge amount of progress to report for Fabla 2.0! With the structure of the sound engine designed and diagrammed, the features of Fabla 2.0 are now set and the coding of the engine has begun. AVTK for the UI: OpenAV user interfaces are known for their distinct look and feel, and we’re happy to announce that an all-new AVTK is… Read more →

by harry at December 13, 2014 12:51 AM

December 11, 2014

Objective Wave

Libre Music Production – Interview

Because self-promotion never hurt anyone, here is an interview I did for the Libre Music Production website:
http://www.libremusicproduction.com/articles/lmp-asks-3-interview-aur%C3%A9lien-leblond

I actually cannot stress enough how good http://www.libremusicproduction.com is! The website contains lots of news on music production with free software, tutorials and (as just said) interviews.


by blablack at December 11, 2014 07:53 PM

Create Digital Music » Linux

People Will Come: There’s Already a Free Sample Editor for volca sample

Getting “open” still scares many music manufacturers. Maybe they should double-check those fears.

See, if you add simple jacks (MIDI, audio), if you add driver-less operation (via USB and the like), let alone if you design simple APIs or create open source interfaces, you open the door to people making things that work with your creation, for free. They have to want to be there – but we make music. We love music gadgets. If your gadget is worth using in the first place, it’s worth opening up to other things.

You know. “If you build it … people will come.” The one constant is baseb– um, music, sorry.

At least, the magic is working for KORG. Just days – seriously, days – after getting hold of an open API for the KORG volca sample, there’s a cross-platform sample loading tool for this inexpensive sound gizmo. The volca sample is barely even shipping yet, and someone has created a free utility that works with it for free.

That’s no minor development, either, because one thing that has held at least some readers back from buying a volca sample is that it requires a KORG iPhone/iPod touch utility for loading samples. KORG’s app is cute and clever, but maybe you don’t have an iPhone – or don’t want to be dependent on one.

The Caustic Editor runs on Android. It runs on Mac OS X. It runs on Linux and Windows.

On iOS, it performs tricks even the stock KORG app can’t – like functionality with Audiobus, meaning you can open up sound design possibilities with other iOS apps.

Otherwise, it’s able to do everything you would need to do with samples on the volca sample because KORG wrote a simple SDK that makes it so. (And, honestly, KORG didn’t do that much – they released a simple library for handling samples covering just the basics.)

The development cost on KORG’s part for open sourcing this library appears reasonably minor (especially with this payoff). Support costs are harder to predict. It’s possible users will grab this utility, ignore all the disclaimers, and call up KORG technical support when they have trouble instead of the developer. On the other hand, it’s also possible that this app will generate new sales – and all sales have some support cost associated with them.

Not everyone is KORG. There’s unique passion for the volca range, because they’re so desirable to begin with. But if you’re not in the business of making a desirable product, you have other problems, anyway.

So, speaking as a sometimes-manufacturer here, we have a choice. We can ask ourselves, What Would Kevin Costner Do?

And while I’m still waiting to get a volca sample to even review, if you’ve got one, or were waiting for a cross-platform utility with sample loading, your prayers have been answered. Here’s what Caustic Editor does:

- Record your own samples using your device’s built-in microphone.
- Load any uncompressed, mono or stereo WAV, at any sampling rate or bit depth.
- Apply any of 16 of Caustic’s effects and preview them in real-time, then stamp down and apply more.
- Process waveform audio with Fade In/Out, Normalize, Amplify, Reverse, etc.
- Use Caustic’s C-SFXR to generate retro video game sounds.
- Trim audio precisely, down to individual samples.
- View the frequency spectrum of your audio.

- iOS: Audiobus compatible (receiver)
- iOS: AudioShare compatible (import/export)
- iOS: AudioCopy/Paste compatible (import/export)
- iOS: iTunes file sharing support
- iOS: Open In… support for .wav files.

Volca Sample-specific features:
- Upload to any of the 100 sample slots and keep a database of your device’s state.
- Clear all samples
- Restore factory samples
- Monitor device memory

Caustic Editor for Volca Series “CEVS” [Download]

KORG volca sample [US Product Page]

Caustic is an excellent rack of instruments for both Android and iOS, also worth a look!

Previously: Meet KORG’s New Sample Sequencing volca – And its SDK for Sampling

Hat tip to the incredibly-fast Synthtopia for getting on this story first.

And yes, I’m making fun of myself with the Field of Dreams reference – but, come on, I get to point out when strategic open sourcing really does work.

The post People Will Come: There’s Already a Free Sample Editor for volca sample appeared first on Create Digital Music.

by Peter Kirn at December 11, 2014 05:10 PM

December 10, 2014

OpenAV

OpenAV: Fabla 2.0 Initial Discussion

Wow! Over the last few days, a huge amount of feedback has been received, and a lot of progress has been made to what Fabla 2.0 will be! Checkout the wiki roadmap for details, and add your feature by posting on the Fabla 2.0 wishlist. Details One of the main discussion topics was if each sample-pad should have individual output ports,… Read more →

by harry at December 10, 2014 11:44 PM

December 08, 2014

Linux Audio Announcements - laa@linuxaudio.org

[LAA] MIDA 0.3.0

From: Mark Karpov <markkarpov@...>
Subject: [LAA] MIDA 0.3.0
Date: Dec 8, 10:35 am 2014

I would like to announce MIDA, version 0.3.0.

Short description:

MIDA is a minimalistic language for the algorithmic generation of MIDI
files. MIDA is not interactive in the sense that you cannot control the
result of its activity in real time; it is intended for producers and
should be used with a DAW. MIDA can help you create variative elements
in your music in a very simple way. Since MIDI can control a lot of
different instruments, the power of MIDA is truly great.

The main reason for MIDA's development is to create a software tool that
can be used in a way that does not change an established workflow, so
people can keep using familiar plugins and software instruments.

Unlike many programs in the field of algorithmic composition, MIDA does
not aim at creating complex or peculiar algorithms, because the complexity
of such systems rarely translates into emotional impact directly. Instead,
MIDA gives you a simple and effective tool to vary the parameters of MIDI
events.

MIDA does not have a GUI, because it doesn't need one. It can work as a
REPL (interactive mode) or as a translator from source to MIDI files
(batch mode).

MIDA repository is here: https://github.com/mrkkrp/mida

Documentation online: http://mrkkrp.github.io/mida/

Although MIDA is created on GNU/Linux and for GNU/Linux, it can be
compiled on many platforms (though only the Windows version has been tested).

MIDA is licensed under the GNU GPL version 3; it's free software.

Regards,

-- Mark Karpov
_______________________________________________
Linux-audio-announce mailing list
Linux-audio-announce@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-announce

read more

December 08, 2014 11:00 AM

[LAA] Csound 6.04 is released

From: jpff <jpff@...>
Subject: [LAA] Csound 6.04 is released
Date: Dec 8, 10:34 am 2014

After some hiccups and technical issues, Csound 6.04 is now ready for
use.

As usual files are available on Sourceforge at
https://sourceforge.net/projects/csound/files/csound6/Csound6.04/
Sources, packages and manuals are there

==John ffitch
------------------------------------------------------------------------

============================
CSOUND VERSION 6.04
RELEASE NOTES VERSION 6.04
============================

This new version has many extensions and fixes: many new opcodes
and a significant amount of internal reworking. There is a new
frontend, and the iOS and Android versions have seen many improvements.

As ever, we track bugs and requests for enhancements via the GitHub
issues system. Proposals for the next release are already being
made, but the volume of changes requires a release now.

The Developers



USER-LEVEL CHANGES
==================

New opcodes:
o pinker generates high quality pink noise

o power opcode ^ now works with array arguments

o exciter opcode, modelled on the Calf plugin
o vactrol opcode simulates an analog envelope follower

o family of hdf5 opcodes to handle hdf5 format files

o (experimental undocumented) buchla opcode models the lowgate
filter of Buchla

o New k-rate opcodes acting on arrays:
- transforms: rfft, rifft, fft, fftinv
- complex product: complxprod
- polar - rectangular conversion: rect2pol, pol2rect, mags, phs,
- real - complex: r2c, c2r
- windowing: window
- cepstrum: pvsceps, iceps, ceps
- column / row access: getrow, getcol, setrow, setcol
- a-rate data - k-array copy: shiftin, shiftout
- phase unwrapping: unwrap


New Gen and Macros:

Orchestra:

o Line numbers corrected in instr statements

o New control operation, while, for looping

o A long-standing bug with macros which use the same name for an
argument has been corrected

o Redefinition of an instrument in a single call to compile is
flagged as an error

o ID3 header skip for mp3 files now properly implemented.

o Errors induced by not defining the location of STK's raw wave
files have been removed

o bug fixed where UDOs could not read strings from pfields

o bug fixed which hid tb opcodes at i-rate

o Attempts to use two OSClisteners with the same port are now
trapped rather than giving a segmentation fault

Score:

Options:

Modified Opcodes and Gens:

o stackops opcodes deprecated

o lenarray extended to handle multi-dimensional arrays

o ftgenonce accepts string arguments correctly and multiple
string arguments

o max and min now have initialisation-time versions

o gen23 improved regarding comments and reporting problems

o in OSCsend the port is now a k-rate value

o socksend now works at k-rate

o a number of envelope-generating opcodes are now correct in
sample-accurate mode

o faust compilation is now lock-protected

o mp3 fixed to allow reinit to be used with it.

o In remote opcode the name of the network can be set via the
environment variable CS_NETWORK. Defaults to en0 (OSX) or
eth0.

o invalue, outvalue are available at i-rate as well as k-rate

Utilities: [message continues]

read more

December 08, 2014 11:00 AM

[LAA] DrumGizmo 0.9.7 released

From: Bent Bisballe Nyeng <deva@...>
Subject: [LAA] DrumGizmo 0.9.7 released
Date: Dec 8, 10:34 am 2014

We're proud to announce the immediate availability of DrumGizmo version
0.9.7!

New features / fixes include:
* Resampling now implemented and working
* Gui lineedit fixes
* Global gui config file

Download it from http://www.drumgizmo.org

Visit us at the official irc channel at the Freenode network. Channel
name #drumgizmo. We would love to hear from you!

// The DrumGizmo team
_______________________________________________
Linux-audio-announce mailing list
Linux-audio-announce@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-announce

read more

December 08, 2014 11:00 AM

Libre Music Production - Articles, Tutorials and News

DrumGizmo v0.9.7 released

The DrumGizmo team have just announced the release of version 0.9.7 of DrumGizmo. The most notable change in this release is the implementation of realtime resampling. This means that DrumGizmo will match, in realtime, whatever samplerate you are working with, so you are no longer restricted to 44.1kHz!
 
Check out the announcement at Linux Audio Users mailing list for full details
 

by Conor at December 08, 2014 09:51 AM

OpenAV News

OpenAV have recently redesigned their website. They are also looking for some feedback relating to some of their projects. 
 
Are you a VJ or projection-master? OpenAV would like to hear from you about your dream workflow at an event: what is it that you need, and what issues do you encounter most often?
 

by Conor at December 08, 2014 09:40 AM

December 06, 2014

OpenAV

Fabla 2.0 : User feedback wanted

Fabla 2.0 : User feedback wanted

Fabla, a drum sampler, is currently being redesigned by OpenAV. Fabla 2.0 will include lots of new functionality, including per-pad FX, faster previewing of samples, and true-stereo samples. Get involved and let OpenAV know what features you want! The list of features that will be supported is available at https://github.com/harryhaaren/openAV-Fabla/wiki. Contributing ideas: so you wanna collaborate? Great! Post ideas, requests, and discuss… Read more →

by harry at December 06, 2014 02:13 AM

Linux Audio Announcements - laa@linuxaudio.org

[LAA] [LAD] [LAU] Guitarix 0.32.1 Bug-fix release

From: hermann meyer <brummer-@...>
Subject: [LAA] [LAD] [LAU] Guitarix 0.32.1 Bug-fix release
Date: Dec 5, 11:15 pm 2014

Bug-fix release 0.32.1 is out, update is recommended!!

This release fixes a long-outstanding issue with LADSPA/LV2 plugin
load/unload and UI modification.

Please refer to our project page for more information:
http://guitarix.sourceforge.net/

Download Site:
http://sourceforge.net/projects/guitarix/

regards
hermann
_______________________________________________
Linux-audio-announce mailing list
Linux-audio-announce@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-announce

read more

December 06, 2014 01:01 AM

[LAA] Yoshimi V 1.3.0

From: Will Godfrey <willgodfrey@...>
Subject: [LAA] Yoshimi V 1.3.0
Date: Dec 5, 11:15 pm 2014

There it is, all shiny and bright :)
http://sourceforge.net/projects/yoshimi

We have the usual crop of optimisations and (fewer) bugfixes.
There are further improvements to the GUI, including significant layout changes.
The most involved change has been the introduction of LV2 support - as our most
intrepid friends will know :)

This has been extensively tested on four completely different LV2 hosts, along
with detailed tests of Yoshimi in stand-alone mode to ensure there are no
regressions.

Supported features:
1. Sample-accurate midi timing.
2. State save/restore support via LV2_State_Interface.
3. Working UI support via LV2_External_UI_Widget.
4. Programs interface support via LV2_Programs_Interface.

Planned features:
1. Multi-channel audio output. Currently only 'outl' and 'outr' are routed
to the LV2 plugin instance.
2. Controls automation support. This will be part of a common controls
interface.

--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
_______________________________________________
Linux-audio-announce mailing list
Linux-audio-announce@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-announce

read more

December 06, 2014 01:01 AM

[LAA] ANN: Upcoming Fall DISIS Event

From: Ivica Ico Bukvic <ico@...>
Subject: [LAA] ANN: Upcoming Fall DISIS Event
Date: Dec 5, 11:07 pm 2014


Apologies for x-posting...

The School of Performing Arts and the Institute for Creativity, Arts, and Technology at Virginia Tech are pleased to announce

DISIS^3 (Digital Interactive Sound & Intermedia Studio) Event

Monday, December 1, 2014
7:30 pm
Cube, Moss Arts Center
free and open to the public

Featuring computer music by Virginia Tech faculty and students, guest violinist Sarah Plum, guest composer Elizabeth Hoffman, soprano Lee Heuermann and trombonist Jay Crone, the Linux Laptop Orchestra (L2Ork), animator Tamar Petersen, and a telematic performance with faculty and students at the University of Virginia, presented with the 128 speaker 3D surround sound system in the Cube of the Moss Arts Center.

Event poster: http://disis.icat.vt.edu/images/main/events/141201_poster.jpg

-----

The Cascades (2014), for multichannel computer music
Eric Lyon

Between (2013, revised 2014), for laptop orchestra and saxophone
Ivica Ico Bukvic

Brock Allen, saxophone

Linux Laptop Orchestra (L2Ork):
Brock Allen
Frehiwot Almente
Cody Cahoon
Rachel Gertler
Deborah Goldeen
Brandon Hale
Christian Kurmel
Paige Lopynski
Peter Nelson
Jocelyn Roman
Jacob Stenzel
Omavi Walker

Ivica Ico Bukvic, director

Half-Life (2009), for laptop orchestra and narrator
Ivica Ico Bukvic

Colleen Beard, narrator
L2Ork

Song Without Words #1 (2014), for improvising singer, computer processing, and laptop orchestra
Eric Lyon

Lee Heuermann, soprano
Eric Lyon, computer processing
L2Ork

Intermission

Il Prete Rosso, for amplified violin, motion sensor, interactive computer effects, and multichannel audio
Charles Nichols

Sarah Plum, violin
Charles Nichols, computer

Handshake Improvisation, for telematic improvisation

University of Virginia:
Lisa Cella, flute
Matthew Burtner, sax
Rachel Trapp, horn
Jeremy Muller, percussion

Virginia Tech:
Sarah Plum, violin
Charles Nichols, electric violin
Brock Allen, Ivica Ico Bukvic, and Christian Kurmel, L2Ork
Tamar Petersen, video

Troglodyte (2014) for computer, Leap Motion, and processed voice
Tanner Upthegrove

Tanner Upthegrove, Leap Motion and computer

Thirteen Ways to Leave Your Hexachord, for trombone and multi-channel electroacoustic sound,
and accompanying video
Elizabeth Hoffman

Jay Crone, trombone

Best,

--
Ivica Ico Bukvic, D.M.A.
Associate Professor
Computer Music
ICAT Senior Fellow
DISIS, L2Ork
Virginia Tech
School of Performing Arts

read more

December 06, 2014 12:01 AM

December 05, 2014

Libre Music Production - Articles, Tutorials and News

End of year survey - Have your say!

 
As 2014 comes to an end, we want to see what everyone’s favourite FLOSS audio software and plugins are. We intend to have an annual end-of-year survey, to see what's currently popular, what stands the test of time, and what people still feel is lacking in the Linux Audio world.
 
If you are interested in participating, you can find the survey here

by Conor at December 05, 2014 08:38 PM

Guitarix bug-fix release. Update recommended!

The Guitarix devs have just pushed out a bug-fix release, version 0.32.1. This update is recommended and fixes a long outstanding issue with LADSPA/LV2 plugin load/unload and UI modification.

Latest downloads can be found at the Guitarix sourceforge page.

by Conor at December 05, 2014 09:55 AM

cSounds.com

2015 Csound Conference in St. Petersburg, Russia


ICSC 2015

The 3rd International Csound Conference

2-4 OCTOBER 2015, ST.PETERSBURG, RUSSIA

Join us for the 3rd International Csound Conference!
Three days of concerts, papers, workshops and meetings.
Get in touch with the global community of composers, sound designers and programmers.

 The Conference Will Be Held At

The Bonch-Bruevich St.Petersburg
State University of Telecommunications

The region's biggest training and scientific complex, specializing in information technologies and communications.

Contact

For any questions, please contact us by email: icsc2015@yahoo.com

Conference Chair

Gleb Rogozinsky

by Richard Boulanger at December 05, 2014 06:09 AM

December 04, 2014

Recent changes to blog

Guitarix 0.32.1 Bug-fix release

Bug-fix release 0.32.1 is out, update is recommended!!
This release fixes a long-outstanding issue with LADSPA/LV2 plugin load/unload and UI modification.

by brummer at December 04, 2014 07:00 AM

December 03, 2014

OpenAV

Dell Solution Center Visit

Dell Solution Center Visit

OpenAV visited the Dell Solution Center in Limerick this morning, they have some really awesome stuff going on there. Amazing tech being utilized and cool people running the show. At OpenAV we’re really excited about bench marking and stress testing some of the projects we’ve been working on in private, to check how well they scale. Based on the stuff we saw… Read more →

by harry at December 03, 2014 05:57 PM

Libre Music Production - Articles, Tutorials and News

LMP Asks #3: An interview with Aurélien Leblond

In this interview we talk to Aurélien Leblond of the one-man band A Violent Whisper. Aurélien has ported ALSA Modular Synth modules over to the LV2 plugin format. He has also been hacking away at adding new features to Hydrogen.

by Conor at December 03, 2014 11:08 AM

cSounds.com

CsoundQt 0.9.0 Released


This version includes:

  • A new virtual MIDI keyboard
  • Visual display of matching (or unmatched) parentheses
  • Correct highlighting of type marks for functional opcodes (e.g. oscil:a)
  • Put back status bar
  • Added template list in file menu. Template directory can be set from the environment options
  • Added home and opcode buttons to help dock widget
  • Removed dependency on libsndfile (now using Csound’s perfThread record facilities)
  • Fixed tab behavior
  • Updated version of Stria Synth (thanks to Emilio Giordani)
  • Dock help now searches as user types (not only when return is pressed)

Download HERE

 

 

by Richard Boulanger at December 03, 2014 08:51 AM

Csound 6.04 Released


CSOUND VERSION 6.04 @ http://csound.github.io

Click HERE to download.

RELEASE NOTES   VERSION 6.04
============================

This new version has many extensions and fixes: many new opcodes
and a significant amount of internal reworking. There is a new
frontend, and the iOS and Android versions have seen many improvements.

As ever we track bugs and requests for enhancements via the github
issues system.  Already proposals for the next release are being
made, but the volume of changes requires a release now.

The Developers

USER-LEVEL CHANGES
==================

New opcodes:
o    pinker generates high quality pink noise

o    power opcode ^ now works with array arguments

o    exciter opcode, modelled on the Calf plugin

o    vactrol opcode simulates an analog envelope follower

o    family of hdf5 opcodes to handle hdf5 format files

o    (experimental undocumented) buchla opcode models the lowgate
filter of Buchla

o    New k-rate opcodes acting on arrays:
–  transforms: rfft, rifft, fft, fftinv
–  complex product: complxprod
–  polar – rectangular conversion: rect2pol, pol2rect, mags, phs
–  real – complex: r2c, c2r
–  windowing: window
–  cepstrum: pvsceps, iceps, ceps
–  column / row access: getrow, getcol, setrow, setcol
–  a-rate data – k-array copy: shiftin, shiftout
–  phase unwrapping: unwrap

New Gen and Macros:

Orchestra:

o    Line numbers corrected in instr statements

o    New control operation, while, for looping

o    A long-standing bug with macros which use the same name for an
argument has been corrected

o    Redefinition of an instrument in a single call to compile is
flagged as an error

o    ID3 header skip for mp3 files now properly implemented.

o    Errors induced by not defining the location of STK’s raw wave
files have been removed

o    bug fixed where UDOs could not read strings from pfields

o    bug fixed which hid tb opcodes at i-rate

o    Attempts to use two OSClisteners with the same port are now
trapped rather than giving a segmentation fault

Score:

Options:

Modified Opcodes and Gens:

o    stackops opcodes deprecated

o    lenarray extended to handle multi-dimensional arrays

o    ftgenonce accepts string arguments correctly and multiple
string arguments

o    max and min now have initialisation-time versions

o    gen23 improved regarding comments and reporting problems

o    in OSCsend the port is now a k-rate value

o    socksend now works at k-rate

o    a number of envelope-generating opcodes are now correct in
sample-accurate mode

o    faust compilation is now lock-protected

o    mp3 fixed to allow reinit to be used with it.

o    In remote opcode the name of the network can be set via the
environment variable CS_NETWORK.  Defaults to en0 (OSX) or
eth0.

o    invalue, outvalue are available at i-rate as well as k-rate

Utilities:

Frontends:

icsound:
New frontend icsound is now ready for general use.  icsound is a
Python interface for interactive work in the IPython notebook.

csound~:

Emscripten:

csdebugger:
A number of changes and improvements have been made, like
stepping through active instruments, better line number use

General usage:

Jack module now does not stop Csound if autoconnect fails

Bugs fixed:

o    atsinnoi fixed

o    ftsavek fixed

o    sprintf fixed

o    gen27 fixed, especially with extended arguments, as well as
fixed a number of errors in extended score arguments.

o    Physem opcodes (guiro, cabasa, sekere) fixed so second call
works

o    flooper fixed in mode 2

o    OSCsend multiple fixes

o    UDO fix for case of local ksmps of 1

o    More changes/fixes to dssi code

o    xscanu and scanu fixed

o    temposcal and mincer fixed

o    crash in ftload fixed

====================
SYSTEM LEVEL CHANGES
====================

System changes:

o    In server mode exit is now clean

o    Fixes to rtalsa module

o    Pulseaudio rt module fixes

o    Fix to remove fluidEngine entries for csound instance
(prevents crash on moduleDestroy)

o    Opcodes called through function calls that returned arrays
did not correctly synthesize args as array types due to not
converting the arg specifier to the internal format

o    fixed crashing issue during note initialization for tied
notes due to goto skipping over code

o    fixed incorrect initialization of pfields when a note’s pfield
count was less than the instrument expected (off-by-one)

Internal changes:

* Added Runtime Type Identification for instrument variables;
removed use of XINCODE/XOUTCO

* fix malloc length in negative number parsing, and improved
handling of negative numbers

* writing to circularBuffer is now atomic

* a number of memory leaks and potential dangerous code have been
fixed

* type-inference has been extensively reworked, as have a few
parsing areas

API
===
* Added API function for retrieving GEN parameters used for
creating a table

Platform Specific
=================

iOS

* API Refactored for clearer method names and abstraction names (i.e.
CsoundBinding instead of CsoundValueCacheable)
* Updated to remove deprecated code
* A significant amount of reworking has been done on the code

Android
-------
* API Refactored for clearer method names and abstraction names (i.e.
CsoundBinding instead of CsoundValueCacheable)
* Changes to enable HTML 5 with JavaScript and, it is hoped, WebGL
in the Csound6 Android app.
* Enabled change of screen orientation in the Csound6 app without
forcing a restart of the app.
* Enabled local storage (useful for saving and restoring widget
values, etc.).

Windows
-------

* fixed pointer arithmetic that caused crashing on Windows

*  pyexec changed to use python’s file opening functions to prevent crash on
Windows

OSX

* CsoundAC now compiles

Linux
-----

* threadlocks bug fix on linux.

========================================================================

 

by Richard Boulanger at December 03, 2014 08:41 AM

AudioKit – an open source API using Csound


AudioKit is an open source API that provides an easy and intuitive way to write audio applications entirely in Objective-C or Swift, using Csound as the sound engine, but without requiring one to write any orchestra or score files directly. The project is the result of collaboration between Adam Boulanger, Aurelius Prochazka and Nick Arner. Here’s the web site: http://audiokit.io/

by Richard Boulanger at December 03, 2014 07:45 AM

The New Csound Site on GitHub


For the latest news, the newest music, and the most current releases of Csound, check out the new Csound Site on GitHub @ http://csound.github.io/

 

 

by Richard Boulanger at December 03, 2014 07:24 AM

December 02, 2014

Create Digital Music » open-source

Make DJ Charts from Traktor, and More Free Playlist Tricks


If DJing with vinyl leaves traces in our memory, recollections of physical handling of album sleeves and crates, then for digital DJing, we must rely on data. Traktor DJ is quietly noting everything you do as you play – at the gig, in the studio. The key is how to do something with that data.

The coolest trick came last month from our friend Tomash Ghz – he of the superb Digital Warrior, among others. (Very keen to get back to my desk in Berlin to muck about with the latest step sequencer there, but I digress.)

Tomash has whipped up a free tool that works out what music is at the top of your charts. It’s a tool for automation, to be sure – but it’s also an accurate window into what you’ve played. You can look by month, then see the top ten artists, labels, and tracks. It might encourage you to play more tracks in Traktor, even as you listen to music. And the code is all in Processing, meaning even an amateur coder/hacker can have a look and learn something – or make their own tool. (Visualizations, anyone?)


Have a look:
Traktor DJ Charts export

This illustrates something, too. By opting for plain-text, easily-readable data storage, Native Instruments opened the door to this kind of user mod without having to do any work or add support costs. This should be the norm in the industry, not the exception.

Here are some other tricks for working with playlists and history.


Explore the past. In addition to the History tab, you can find tracks you’ve played as long as you’ve used Traktor in Explorer > Archive. From there, you can generate new playlists and the like.

Export to Web pages. You can export playlists to web pages and not just Traktor’s own playlist format, which gives you a format you could easily edit for track listings on MixCloud and the like. Right-click on your playlist to choose the export option.

Takeaway playlists. Traktor on iPad and iPhone is a great way to mess about with playlists on the go. Native Instruments has a helpful tutorial on the topic.

I also have seen an ambitious tool called Advanced Playlist Editor on Windows, but it appears to be out of date.

What’s sorely missing in Traktor is the ability to fully sync playlists and history between iOS and desktop. That’s too bad. Even just as a journalist, I’d love to use Traktor on my phone when traveling as a way of selecting and charting music; as a DJ, doubly so.

So, I’d like to see NI develop this side of their tools, particularly as they have the leading desktop/mobile combination. And it’d be great to see all music software makers think more about how users can access data about what they’re doing. It’s essential for DJs, but could also be helpful for production.

For more detail on the topic, one of my favorite tutorials this year came from Dan White at DJ TechTools, covering both Serato and Traktor export options:
After The Show: Sharing Sets In Traktor + Serato DJ

Serato users, or users of other digital DJ tools, how do you track your playlists and history?

The post Make DJ Charts from Traktor, and More Free Playlist Tricks appeared first on Create Digital Music.

by Peter Kirn at December 02, 2014 03:57 PM

Hackaday » digital audio hacks

$15 Car Stereo Bluetooth Upgrade

We’ve seen all sorts of ways to implement Bluetooth connectivity on your car stereo, but [Tony’s] hack may be the cheapest and easiest way yet. The above-featured Bluetooth receiver is a measly $15 over at Amazon (actually $7.50 today—it’s Cyber Monday after all) and couldn’t be any more hacker-friendly. It features a headphone jack for plugging into your car’s AUX port and is powered via USB.

[Tony] didn’t want the receiver clunking around in the console, though, so he cracked it open and went about integrating it directly by soldering the appropriate USB pins to 5V and GND on the stereo. There was just one catch: the stereo had no AUX input. [Tony] needed to rig his own, so he hijacked the CD player’s left and right audio channels (read about it in his other post), which he then soldered to the audio output of the Bluetooth device. After shoving all the bits back into the dashboard, [Tony] just needed to fool his stereo into thinking a CD was playing, so he burned a disc with 10 hours of silence to spin while the tunes play wirelessly. Nice!


Filed under: car hacks, digital audio hacks

by Josh Marsh at December 02, 2014 03:01 AM

December 01, 2014

Create Digital Music » open-source

MeeBlip anode Adds Edgy Wavetables; Here’s How They Sound


We have reached a wonderful place. It’s a world where we no longer treat digital and analog as simplistically better or worse, but as techniques, as colors, a spectrum of tools for exploring sound.

Or to put it another way, we now make wild noises however we want.

And that’s very much how I feel about the direction we’ve gone with MeeBlip anode, combining digital waveforms with analog filtering, which is why I’m keen to share it here on CDM and not just via the MeeBlip site. The new 2.0 firmware comes with a selection of 16 wavetables, covering a range from glitchy to rich and sonorous – and raunchy and dirty, for sure.

I finally got to spend the weekend recording some new music with this, having played with it live, and made a little demo sequence in a free moment. (Thanks to online tool Splice for providing their office as a studio on the road here in New York.) I got to use the terrific standalone step sequencer in the Faderfox SC4. Add a USB dongle for that, and you have a terrifically-compact and mobile rig, by the way.

In the selection: single-cycle, fixed waveforms in 16-bit, covering blended saw, granular, FM, and some bit-reduced and distorted sounds.

The full list:

Wave Bank 1: Sawtooth, blended sawtooth, FM 1, distorted 1, granular 1, voice 1, voice 2
Wave Bank 2: Bit reduced 1, bit reduced 2, bit reduced 3, distorted 2, distorted 3, FM 2, FM 3, more granular.

To get at them, you now hold down the MIDI Select button on the back of the unit as you power it on. Then, instead of having the usual width and sweep controls for a rectangle wave, you turn the Width knob to select different wavetables, each with its own timbre. To access the full 16, you change bank using the Sweep switch.

I wanted to use UP UP DOWN DOWN LEFT RIGHT LEFT RIGHT B A, but that would be slightly impossible. Maybe on that 8-bit instrument we saw earlier today, huh?

All anodes shipping now have the new firmware; all existing anodes can be upgraded.

The new firmware is now up on our GitHub site and as always is open source, so you can look at how this is done. If you already own anode, we’ll have instructions on how to update your model. (I’m also hoping to organize some anode owner parties!) If you don’t, you have today to take advantage of a North American holiday sale online, or you can go to fine dealers worldwide:
http://meeblip.com/get-one/

The post MeeBlip anode Adds Edgy Wavetables; Here’s How They Sound appeared first on Create Digital Music.

by Peter Kirn at December 01, 2014 10:09 PM

Lo-Fi SES Looks Like a Game Controller, Plays Like a Chip Instrument


What if there were a hacky, hackable handheld game platform – just for making noises?

That’s what the Arduino-powered Lo-Fi SES is all about. It’s basically a little 8-bit music toy, with a control layout borrowed from Nintendo of the past, but expandable, hackable, and open. The sound is very grungy and digital, but it all appears easy to play.

The cutest touch: you expand the board with “cartridges,” add-ons that connect to the top to add functionality. “One Final Sound Adventure” adds more sounds. “USB: A Link to the Hack” lets you program the board from your computer, using Arduino (since it’s built on that platform). “Smasher Bros” is basically adjustable analog distortion circuitry to add to the output.

Of course, as an instrument, there’s not a whole lot you can play here – it’s limited to the game-style controllers. And you’ll get more compositional use out of a Game Boy combined with nanoloop or LSDJ, or another mobile chip music platform, as there isn’t onboard sequencing. But that said, this does look to be a really fun all-in-one, standalone device for people to play with – if you just want to plug in headphones. And for people looking for a chippy platform to hack, it could be a dream. (The creator suggests making a rumble pack, for instance!) Hope to get my hands on one; in the meantime, we can watch the video and catch some pics and sounds.

On Kickstarter, with basic support starting at US$5 and hardware at US$65. And while I wouldn’t count on a crowd-funded campaign to ship on time to get under the tree, they are saying they’ll ship in December as quickly as possible. Crowd funding ends on December 10.

Lo-Fi SES: Hackable 8-bit chiptunes instrument

The project is the work of Austin-based Assorted Wires.

Check out the open source project on which this was based:
http://wiki.openmusiclabs.com/wiki/StompShieldaudio


The post Lo-Fi SES Looks Like a Game Controller, Plays Like a Chip Instrument appeared first on Create Digital Music.

by Peter Kirn at December 01, 2014 07:18 PM

November 29, 2014

aubio

Save the kiwis

A few months ago, Lukasz Tracewski explained his project on the aubio-user mailing list. Using aubio's onset detection from his custom Python code, he is able to process very large amounts of recordings from New Zealand wilderness to monitor the populations of kiwis and improve their protection.

Kiwi and its egg

A kiwi and its egg, by Shyamal (own work), CC-BY-3.0, via Wikimedia Commons

Here is how Lukasz describes the challenge:

The project is to support efforts in kiwi protection, the kiwi being a famous flightless bird from New Zealand. We count their calls (and also determine gender) in each file and use this information to understand ecosystem health and how well protection efforts are working.

As you can imagine, finding whether there is something of interest in a recording is crucial: if there are too many false positives, then noise reduction by spectral subtraction becomes inefficient. The risk the other way is even greater: if a bird call goes undetected, then it will be included in the noise-only regions and then subtracted from the whole sample, effectively eliminating many possible candidates.

Processing time becomes important when there are truly many samples. As mentioned before, there are 10000 hours of recordings. To get them identified we will probably buy time on an Amazon Web Services EC2 compute-optimized instance (something with 32 virtual CPUs). Time is money in this case, so keeping it short is crucial.

Interestingly, Lukasz found that the energy onset detection method, which often generates a lot of false positives on music signals, was, in his case, the one giving the best result for the detection of bird calls.
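For the curious, here is a rough sketch of what driving that detector with the "energy" method can look like through aubio's C API from C++ (aubio 0.4.x assumed; the sample rate, window and hop sizes, and the way frames are read in are placeholders, not Lukasz's actual pipeline):

// Rough sketch only, not the Ornithokrites code.
#include <aubio/aubio.h>
#include <cstdio>

int main()
{
    const uint_t samplerate = 16000;  // placeholder value
    const uint_t win_size   = 1024;
    const uint_t hop_size   = 512;

    // "energy" is the onset method that worked best here for bird calls
    aubio_onset_t* onset = new_aubio_onset("energy", win_size, hop_size, samplerate);

    fvec_t* in  = new_fvec(hop_size);  // one hop of audio samples
    fvec_t* out = new_fvec(1);         // non-zero when an onset was detected

    // ...fill `in` with hop_size samples from the recording, then:
    aubio_onset_do(onset, in, out);
    if (out->data[0] != 0.f)
        printf("onset at %.3f s\n", aubio_onset_get_last_s(onset));

    del_fvec(out);
    del_fvec(in);
    del_aubio_onset(onset);
    return 0;
}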

Find out more about Lukasz's project on the Ornithokrites web page.

November 29, 2014 10:46 PM

Linux Audio Announcements - laa@linuxaudio.org

[LAA] Rivendell v2.10.2

From: Frederick Gleason <fredg@...>
Subject: [LAA] Rivendell v2.10.2
Date: Nov 29, 12:52 pm 2014

On behalf of the entire Rivendell development team, I'm pleased to announce the availability of Rivendell v2.10.2. Rivendell is a full-featured radio automation system targeted for use in professional broadcast environments. It is available under the GNU General Public License.

From the NEWS file:

Changes:
RDImport Enhancements. Added '--clear-datetimes' and
'--clear-daypart-times' options to rdimport(1).

ELR Data. Added a column to allow ELR data to be seen when editing
logs in RDLogEdit.

HPI Fixes. Fixed a bug that caused ASI cards with only AES3 ports
to fail to detect those ports.

Various other bug fixes. See the ChangeLog for details.

Database Update:
This version of Rivendell uses database schema version 242, and will
automatically upgrade any earlier versions. To see the current schema
version prior to upgrade, see RDAdmin->SystemInfo.

As always, be sure to run RDAdmin immediately after upgrading to allow
any necessary changes to the database schema to be applied.

*** snip snip ***

Downloads, screenshots and further information can be found at http://www.rivendellaudio.org/.

Cheers!


|----------------------------------------------------------------------|
| Frederick F. Gleason, Jr. | Chief Developer |
| | Paravel Systems |
|----------------------------------------------------------------------|
| A room without books is like a body without a soul. |
| -- Cicero |
|----------------------------------------------------------------------|

_______________________________________________
Linux-audio-announce mailing list
Linux-audio-announce@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-announce

read more

November 29, 2014 01:00 PM

November 26, 2014

Arch Linux Pro Audio

Interview MOD : Arch based FX pedal!

After talking to JazzyEagle about ArchAudio last month, we catch up with a recently Kickstarted project: the MOD! The MOD project brings the Arch audio ecosystem to a hardware FX unit for use on stage.

Interview with MOD

What is the MOD project?

The MOD is a hardware audio plugin processor based on Arch Linux, JACK and the LV2 plugin standard. MOD devices are autonomous processing units that have an embedded ARM CPU running an Arch Linux based OS. Users can connect remotely to their MODs and assemble virtual pedalboards with the plugins. There is also a Social Network where users can exchange their pedalboards.

Editing a pedalboard using the HTML5 based interface

Awesome! So why did you choose Arch as the distro?

The choice of Arch Linux is based on a few key points: the assembly of the OS starting from a very minimal install, building the pieces on top of it based on feature demand, using a very modular approach. The system philosophy is targeted not towards automation but towards performance. Our system needs to be as lean as possible, so easily customizing the install is necessary. The flexible and practical package management makes it easy to create and manage .tar.gz packages, and this is very valuable to the MOD.

Question: How can Arch developers get involved with MOD?

Both the plugins and the OS can have involvement from the community. All our code is open source and it is possible to run the MOD software on desktop machines. It is all hosted at www.github.com/portalmod. We have an SDK to create the HTML plugin interfaces, so DSP coders can publish their plugins with beautiful and appealing graphic interfaces.

Cool! Thanks for talking to ArchAudio.org. Where can interested readers find out more?

Our website http://portalmod.com has the details; there's a pre-order store online, and you can subscribe to email updates, Twitter and Facebook to keep up to date!

by harryhaaren at November 26, 2014 01:53 PM

November 25, 2014

OpenAV

OpenAV : Events Autumn 2014

OpenAV : Events Autumn 2014

In the past months, OpenAV has attended various events promoting open-source projects and collaboration. Since the LAC, we’ve attended the VLC Developer-Days, Science Hack Day Dublin, Open Developers Meetup Limerick, and the Startup Weekend Limerick. These events have given us fantastic experience in working with various new teams, gaining business experience, learning new skills in domains like video encoding, all while meeting fantastic people at… Read more →

by harry at November 25, 2014 06:51 PM

November 24, 2014

Linux Audio Announcements - laa@linuxaudio.org

[LAA] Qtractor 0.6.4 - The Baryon Throne beta release!

From: Rui Nuno Capela <rncbc@...>
Subject: [LAA] Qtractor 0.6.4 - The Baryon Throne beta release!
Date: Nov 24, 7:48 pm 2014

Aw, snap!

Wait, there's no big deal (nor chromium's) cosmic (mis)alignment
being announced here now. Rest calm. Sorry to delude some of you :)

But, there's one (notable) leap on this tiny side and part of the
universe... (or is it a multiverse? move along...) What I really want to
tell is all about this happening and none else's freaking business:

Qtractor 0.6.4 (baryon throne beta) is released!

Release highlights:
* Punch-in/out over loop-recording/take modes (NEW)
* Latch/momentary MIDI Controllers toggle mode (NEW)
* JACK client/port pretty-name (metadata) support (NEW)
* Custom style and color themes (NEW)
* Mixer strip multi-row layout (NEW)
* Muted audio tracks monitoring on playback (FIX)
* Clip fade-in/out resize on time-stretch (FIX)

As for the clueless (as if there's any):

Qtractor is an audio/MIDI multi-track sequencer application written
in C++ with the Qt4 framework. Target platform is Linux, where the Jack
Audio Connection Kit (JACK) for audio and the Advanced Linux Sound
Architecture (ALSA) for MIDI are the main infrastructures to evolve as a
fairly-featured Linux desktop audio workstation GUI, specially dedicated
to the personal home-studio.

Well, although this being yet another milestone--if one may call it
that way--it also makes it official (yes, deeply engraved in stone) and
definitive as a migration to Git can be, as for source-code control and
management (it's a dirty job I know, but someone has to do it, right?).
Nevermind. It's done.

Meanwhile, please, don't ever hesitate to ask whether any of the
above does affect you some way or another. Or maybe anything else, yay?
Indeed, the more puzzled you feel, the better :)


Website:
http://qtractor.sourceforge.net

Project page:
http://sourceforge.net/projects/qtractor

Downloads:
http://sourceforge.net/projects/qtractor/files

- source tarball:
http://download.sourceforge.net/qtractor/qtractor-0.6.4.tar.gz

- source package (openSUSE 13.2):

http://download.sourceforge.net/qtractor/qtractor-0.6.4-14.rncbc.suse132.src.rpm

- binary packages (openSUSE 13.2):

http://download.sourceforge.net/qtractor/qtractor-0.6.4-14.rncbc.suse132.i586.rpm

http://download.sourceforge.net/qtractor/qtractor-0.6.4-14.rncbc.suse132.x86_84.rpm

- quick start guide & user manual (see also: the wiki):
http://download.sourceforge.net/qtractor/qtractor-0.5.x-user-manual.pdf

- wiki (help wanted!):
http://sourceforge.net/p/qtractor/wiki/

Weblog (upstream support):
http://www.rncbc.org

License:
Qtractor is free, open-source software, distributed under the terms
of the GNU General Public License (GPL) version 2 or later.

Change-log:
- Fixed some old loop-recording clip drawing glitches.
- Current assigned track/channel instrument definition names for MIDI
controllers, note keys, RPN and NRPN, are now in effect on the MIDI clip
editor drop-down lists, whether available.
- Clip/Take/Range... input dialog values are now properly sanitized as
long to prevent invalid take/folding ranges.
- Audio capture/export file type default now set to "wav".
- Extending punch-in/out over loop-recording/takes modes.
- Make audio tracks monitoring always flow while playback is rolling,
independently of their mute/solo state.
- Fixed undo/redo conversion of audio clip offsets under (automatic)
time-stretching eg. due on tempo changes. (ticket by Holger Marzen, thanks).
- Latch/momentary MIDI Controllers toggle mode introduced (a request by
AutoStatic aka. Jeremy Jongepier, thanks).
- JACK client/port p [message continues]

read more

November 24, 2014 08:00 PM

rncbc.org

Qtractor 0.6.4 - The Baryon Throne beta release!

Aw, snap!

Wait, there's no big deal (nor chromium's) cosmic (mis)alignment being announced here now. Rest calm. Sorry to delude some of you :)

But, there's one (notable) leap on this tiny side and part of the universe... (or is it a multiverse? move along...) What I really want to tell is all about this happening and none else's freaking business:

Qtractor 0.6.4 (baryon throne beta) is released!

Release highlights:

  • Punch-in/out over loop-recording/take modes (NEW)
  • Latch/momentary MIDI Controllers toggle mode (NEW)
  • JACK client/port pretty-name (metadata) support (NEW)
  • Custom style and color themes (NEW)
  • Mixer strip multi-row layout (NEW)
  • Muted audio tracks monitoring on playback (FIX)
  • Clip fade-in/out resize on time-stretch (FIX)

As for the clueless (as if there's any):

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt4 framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Well, although this being yet another milestone--if one may call it that way--it also makes it official (yes, deeply engraved in stone) and definitive as a migration to Git can be, as for source-code control and management (it's a dirty job I know, but someone has to do it, right?). Nevermind. It's done.

Meanwhile, please, don't ever hesitate to ask whether any of the above does affect you some way or another. Or maybe anything else, yay? Indeed, the more puzzled you feel, the better :)


Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

License:

Qtractor is free, open-source software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Change-log:

  • Fixed some old loop-recording clip drawing glitches.
  • Current assigned track/channel instrument definition names for MIDI controllers, note keys, RPN and NRPN, are now in effect on the MIDI clip editor drop-down lists, whether available.
  • Clip/Take/Range... input dialog values are now properly sanitized as long to prevent invalid take/folding ranges.
  • Audio capture/export file type default now set to "wav".
  • Extending punch-in/out over loop-recording/takes modes.
  • Make audio tracks monitoring always flow while playback is rolling, independently of their mute/solo state.
  • Fixed undo/redo conversion of audio clip offsets under (automatic) time-stretching eg. due on tempo changes. (ticket by Holger Marzen, thanks).
  • Latch/momentary MIDI Controllers toggle mode introduced (a request by AutoStatic aka. Jeremy Jongepier, thanks).
  • JACK client/port pretty-name (metadata) support is being seamlessly introduced. (EXPERIMENTAL)
  • Audio frame/MIDI time drift correction is now an option on View/Options.../MIDI/Playback/Enable MIDI queue time drift correction.
  • Transport auto-backward feature now honoring last position playback was started.
  • Introducing brand new application user preferences on View/Options.../Display/Options/Custom style and color themes (eg. "KXStudio", by Filipe Coelho aka. falkTX).
  • Mixer widget gets automatic multi-row strip layout.
  • Clip fade-in/out now follows time-stretch resizing, via shift/ctrl+click and drag one of its edges.
  • Fixed a typo causing FTBFS when VST plug-in support is explicitly disabled (./configure --disable-vst).

Enjoy && Have plenty of fun.

by rncbc at November 24, 2014 06:30 PM

Libre Music Production - Articles, Tutorials and News

November 21, 2014

Linux Audio Announcements - laa@linuxaudio.org

[LAA] [LAU] [LAD] Guitarix 0.32.0 released

From: hermann meyer <brummer-@...>
Subject: [LAA] [LAU] [LAD] Guitarix 0.32.0 released
Date: Nov 21, 12:51 am 2014


The Guitarix developers proudly present

Guitarix release 0.32.0

For the uninitiated, Guitarix is a tube amplifier simulation for
jack (Linux), with an additional mono and a stereo effect rack.
Guitarix includes a large list of plugins[*] and supports LADSPA / LV2
plugins as well.

The guitarix engine is designed for LIVE usage, and features ultra-fast,
glitch- and click-free preset switching; it is fully MIDI and/or remote
controllable (Web UI not included in the distributed tarball).

Here is the "Ultimate Guide to Getting Started With Guitarix".

This release fixes bug #16 "empty effect menu with clear-skin option",
adds new tuning scales (19- and 31-TET) to guitarix and Gxtuner.lv2,
adds MIDI Clock and JACK Transport support to guitarix, moves a couple
of controllers from unit:ms|hz to unit:bpm so that they can easily be
synced with the MIDI Beat Clock, and introduces a new LV2 plugin:
* GxMultiBandReverb

Please refer to our project page for more information:
http://guitarix.sourceforge.net/

Download Site:
http://sourceforge.net/projects/guitarix/

Forum:
http://guitarix.sourceforge.net/forum/

Please consider visiting our forum or leaving a message on
guitarix-developer@lists.sourceforge.net


regards
hermann


November 21, 2014 10:00 PM

Create Digital Music » open-source

ROLI, Makers of Seaboard Instrument, Just Bought The Leading C++ Audio Framework


Here’s some important news that might impact you – even though you may never have heard of either the instrument maker or know anything about code libraries. Bear with us. But an experimental instrument builder and design shop just acquired the most popular framework used by audio developers, a set of free and open source gems.

The film explaining the announcement:

First, there’s ROLI. Now, to most of us in the music world, ROLI are the Dalston, London firm that make an alternative keyboard called the Seaboard – a sort of newer cousin to the Haken Continuum Fingerboard that uses foam that you press with your fingers to add expression and bend pitches. But ROLI wants you to think of them as a design shop focused on interaction. So they don’t just say “we’re going to go make weird instruments”; whether you buy the pitch or not, they say they want nothing less than to transform all human machine interaction.

And yes, there’s a film for that, too. (Those of you playing the startup drinking game, fair warning: the words “design” and “artisanal” appear in the opening moments, so you could wind up a bit unhealthy.)

ROLI isn’t a company forged in the mold of most music manufacturers. They’re most definitely a startup. They have an in-house chef and a Wellness Manager, even before they’re widely shipping their product. That’s in stark contrast to the steady growth rate of traditional music gear makers. (I’ve seen both Native Instruments and Ableton put up charts that show linear hiring over a period of many years. Many other makers were bootstrapped.) The difference: record US$12.8 million in round A funding. And as the Wall Street Journal noted at the time, that comes at a time when some big players (Roland) are seeing diminished sales.

With that additional funding, ROLI are being more aggressive. If it pays off, it could transform the industry. (And I’d say that’s true even – maybe especially if – they manage to use a musical instrument as a gateway to other non-musical markets.)


So, they’re buying JUCE. JUCE is an enormous library that allows developers to easily build audio plug-ins (VST, VST3, AU, RTAS and AAX are supported), design user interfaces quickly with standard libraries, and handle audio tasks (MIDI, sound, and so on), networking, data, and other tasks with ease. A million lines of code and modular support for these features give those developers a robust toolkit of building blocks, saving them the chore of reinventing the wheel.

Just as importantly, the results can run across desktop and mobile platforms – OS X, Windows, Linux, iOS, and Android all work out of the box.

I couldn’t get solid stats in time for this story on how many people use JUCE, but it’s a lot. KORG, Pioneer, AKAI, and Arturia are just a few names associated with it. You almost certainly have used JUCE-powered software, whether you’re aware of it or not.

But ROLI aren’t just acquiring a nifty code toolkit for audio plug-in makers. JUCE’s capabilities cover a range of non-audio tasks, too, and include an innovative real-time C++ compiler. And they acquire not just the code, but its creator Julian Storer, who will become Head of Software Architecture for ROLI.

What does this mean for JUCE? Well, in the short term, it means more investment. Previously the work of Jules alone, a solitary genius of C++, JUCE will now have a team of people contributing, say ROLI. They will add staff to focus on developing the framework as Jules is named “Editor-in-Chief” of JUCE – a sort of project lead / curator for the tools. For that reason, it’s hard to see this as anything but good news for JUCE developers, for the time being. In fact, Jules is very clear about his ongoing role – and not much changes:

And for the foreseeable future, it’s still going to be me who either writes or approves every line of code that gets into the library. I’m hoping that within a couple of years we’ll have a team of brilliant coders who are all pumping out code that perfectly matches the quality and style of the JUCE codebase. But in the meantime, I’ll be guarding the repository with an iron fist, and nothing will be released until I’ve checked, cleaned and approved it myself. But even in the short-term, by having a team behind the scenes working on features, and with me acting as more of an “editor-in-chief” role to ensure that the code meets the required standard, we’ll be able to be a lot more productive without losing the consistency and style that is so important to JUCE’s success.

Read his full letter on the JUCE forum.

JUCE’s source is already open – most modules are covered by the GPL (v2 or v3, depending). You therefore pay only if you want to release closed-source code (or, given Apple’s restrictions, anything for iOS); commercial licenses are not expensive.

The murkier question is actually how this will evolve at ROLI. The word I heard was immediately “ecosystem.” In the Apple-centered tech world, it seems, everything needs to have an SDK – even new rubber keyboards – so ROLI may hope to please its investors with the move. And that makes some practical sense, too. In order to communicate with software instruments, the Seaboard needs to send high-resolution expression data; ROLI use System Exclusive MIDI. It’s now a no-brainer to wrap that directly into JUCE’s library in the hopes plug-in and software instrument makers bite and add support. What’s interesting about this is that it might skirt the usual chicken and egg problem – if adding compatibility is easy enough, instrument makers (always fans of curiosities anyway) may add compatibility before the Seaboard has a big installed base.

In fact, that in turn could be good for makers of other alternative instruments, too; ROLI are working on standardizing on methods for this kind of data.

Of course, that still depends on people liking the Seaboard instrument. And ROLI say their ambitions don’t stop at futuristic pianos. CEO/Founder Roland Lamb (that’s no relation to Roland the Japanese musical instrument company) paints a broader picture:

“At ROLI, our larger vision is to reshape interaction. To do that, we need to transform every technological link from where people create inputs, to where computers create outputs. That’s a tall order, but acquiring and investing in JUCE is our most significant step towards this challenging goal since we invented and developed the Seaboard.“

Now, I am frequently in public appearances making just this argument, that musical instrument interaction can lead to innovative design solutions in other areas. But in this case, I don’t know what it means. Whatever ROLI is working on beyond the Seaboard, we haven’t seen it. At least as far as JUCE, they’re building competency and assets that enable human/hardware communication in the software frameworks themselves. We’ll just have to see how that’s applied.

In coming weeks, I hope to look a little closer at how the Seaboard and other similar devices handle communication, and whether there’s a chance to make that area smarter for a variety of hardware. And I’ll also let you know what more we learn about ROLI and JUCE.

If you’re a developer, there are ongoing JUCE meetups planned, so you can check out the framework yourself or meet other existing users. These aren’t limited to London – Paris and Helsinki were recent tour stops, with Berlin (in Ableton HQ, no less) upcoming.

JUCE Meetup

JUCE site

JUCE Acquisition Press Release

https://www.roli.com/

The post ROLI, Makers of Seaboard Instrument, Just Bought The Leading C++ Audio Framework appeared first on Create Digital Music.

by Peter Kirn at November 21, 2014 05:07 PM

OpenAV

MOD and OpenAV talks (flashback)

In October, towards the end of the MOD Kickstarter campaign, Gianfranco from the MOD project and Harry from OpenAV talked about the MOD at Trinity College Dublin and the University of Limerick. We showcased the MOD to the audience, who were generally blown away by its capabilities and also its build quality!

Apart from the talks, we discussed details of building plugins, and OpenAV demo-ed some projects that have not been released yet… These discussions led to great ideas of integration for MOD and OpenAV plugins. The developers at OpenAV have a MOD Quadra, and we are currently writing and testing plugins on the hardware. Expect more posts about the MOD!

Cheers, -Harry

PS: Some pictures of the MOD demos in TCD and UL.


by harry at November 21, 2014 04:07 PM

Nothing Special

Common Synthesizer Envelope Types

The term "infographic" seems like such a marketing buzzword it makes me want to ignore whomever says it. But I'm all for the presentation of information in a logical and clear manner. As I was pondering on how to increase interest in my audio plugins I thought about my ADBSSR envelopes that I love in the cellular automaton synth. I wanted some way to present quickly both how to use them and why they offer advantage over traditional ADSR envelope generators.They do add complexity but there's really no way to add a swell as we horn players do all the time to a patch without it. It even can look exactly like an ADHR when the Breakpoint is set to 1. Anyway I thought a diagram with explanatory text would do the trick. So I made one:

Figure 1: Not an infographic
This isn't an infographic though. I don't really know why I think that word is so stupid. Probably just because I tend to hate on anything trendy and hyped. Anyhow. Now you know how I feel. This image is in the public domain though I would appreciate citation.

by Spencer (noreply@blogger.com) at November 21, 2014 03:39 PM

Making an LV2 plugin GUI (yes, in Inkscape)

Told you I'd be back.

So I made this pretty UI last post but I never really told you how to actually use it (I'm assuming you've read the previous 2 posts, this is the 3rd in the series). Since I'm "only a plugin developer," that's what I'm going to apply it to. Now I've been making audio plugins for longer than I can hold my breath, but I've never bothered to make one with a GUI. GUI coding seems so boring compared to DSP and it's so subjective (user: "that GUI is so unintuitive/natural/cluttered/inefficient/pretty/ugly/slow etc. etc....") and I actually like the idea of using your ears rather than a silly visual curve, but I can't deny, a pretty GUI does increase usership. Look at the Calf plugins...

Anyhow, regardless of whether it's right or wrong I'm going to make GUIs (that are completely optional, you can always use the host-generated UI). I think with the infamous cellular automaton synth I will actually be able to make it easier to use, so the GUI is justifiable, but other than that they're all eye candy, so why not make 'em sweet? So I'll draw them first, then worry about making them an actual UI. I've been trying to do this drawing-first strategy for years but once I started toying with svg2cairo I thought I might actually be able to do it this time. Actually as I'm writing this paragraph the ball is still up in the air, so it might not pan out, but I'm pretty confident by the time you read the last paragraph in this almost-tutorial I'll have a plugin with a GUI.

So let's rip into it:


One challenge I have is that I really don't like coding C++ much. I'm pretty much a C purist. So why didn't I use GTK? Well, 'cause it didn't have AVTK. Or ntk-fluid. With that fill-in-the-blank development style fluid lends to, I barely even notice that it's C++ going on in back. It's a pretty quick process too. I had learned a fair bit of Qt, but was forgoing that anyway, and with these new (to me) tools I had a head start and got to where I am relatively quickly (considering my qsvg widgets are now 3 years old and unfinished).

The other good news is that the DSP and UI are separate binaries and can have completely separate codebases, so I can still do the DSP in my preferred language. This forced separation is very good practice for realtime signal processing. DSP should be the top priority and should never ever ever have to wait for the GUI for anything.

But anyway, to make an LV2 plugin GUI we'll need to add some extra .ttl stuff. So in manifest.ttl:
@prefix lv2:  <http://lv2plug.in/ns/lv2core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ui:   <http://lv2plug.in/ns/extensions/ui#> .
 
 <http://infamousplugins.sourceforge.net/plugs.html#stuck>
a lv2:Plugin, lv2:DelayPlugin ;
lv2:binary <stuck.so> ;
rdfs:seeAlso <stuck.ttl> .
 
<http://infamousplugins.sourceforge.net/plugs.html#stuck_ui>
        a ui:X11UI ;
        ui:binary <stuckui.so> ;
        lv2:extensionData ui:idle .

That's not a big departure from the no-UI version, but we'd better make a stuckui.so to back it up. We've got a .cxx and .h from ntk-fluid that we made in the previous 2 posts, but it's not going to be enough. The callbacks need to do something. But what? Well, they will be passing values into the control ports of the plugin DSP somehow. OpenAVproductions genius Harry Haaren wrote a little tutorial on it. The thing is called a write function. Each port has an index assigned by the .ttl and the DSP source usually has an enum to keep these numbers labeled. So include (or copy) this enum in the UI code, declare an LV2UI_Write_Function and also an LV2UI_Controller that will get passed in as an argument to the function. Both of these will get initialized with arguments that get passed in from the host when the UI instantiate function is called. The idea is the LV2UI_Write_Function is a function pointer that will call something from the host that stuffs data into the port. You don't need to worry about how that function works, just feel comforted knowing that wherever that points, it'll take care of you. In a thread-safe way, even.
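For example, a sketch of what such a shared enum could look like (the port names and indices here are purely illustrative; the real ones live in stuck.h and must match the lv2:index values in stuck.ttl):

enum stuck_ports {
    STUCK_INPUT  = 0,  // audio in                 (illustrative index)
    STUCK_OUTPUT = 1,  // audio out                (illustrative index)
    STICKIT      = 2,  // "Stick It!" control port (illustrative index)
    DRONEGAIN    = 3   // drone gain control port  (illustrative index)
};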

Another detail (that I forgot when I first posted this yesterday) is declaring that this plugin will use the UI you defined in the manifest.ttl. What that means is that in stuck.ttl you add the ui prefix and declare the stuck_ui URI as the UI for this plugin:
@prefix doap:  <http://usefulinc.com/ns/doap#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

@prefix lv2: <http://lv2plug.in/ns/lv2core#> .
@prefix ui: <http://lv2plug.in/ns/extensions/ui#> .

<http://infamousplugins.sourceforge.net/plugs.html#stuck>
a lv2:Plugin, lv2:DelayPlugin ;
doap:name "the infamous stuck" ;
doap:maintainer [
foaf:name "Spencer Jackson" ;
foaf:homepage <http://infamousplugins.sourceforge.net> ;
foaf:mbox <ssjackson71@gmail.com> ;
] ;
lv2:requiredFeature <http://lv2plug.in/ns/ext/urid#map> ;
lv2:optionalFeature lv2:hardRTCapable ;
ui:ui <http://infamousplugins.sourceforge.net/plugs.html#stuck_ui> ;

lv2:port [
... 

So enough talk. Let's code.
For LV2 stuff we need an additional header. So in an extra code box (I used the window's):
#include "lv2/lv2plug.in/ns/extensions/ui/ui.h"


It will be convenient to share a single source file for which port is which index. That eliminates room for error if anything changes. So in an additional code box (the Aspect Group's since the window's are all full):
#include"stuck.h"
 
We'll also need two additional members in our StuckUI class. Do this by adding two "declarations" in ntk-fluid. The code is:
LV2UI_Write_Function write_function;

and
LV2UI_Controller controller;

And finally, in each callback add something along the lines of (e.g. for the Stick It! port):
write_function(controller,STICKIT,sizeof(float),0,&stickit->floatvalue);

This is calling the write function with the controller object, port number, "buffer" size (usually the size of a float), protocol (usually 0, for float), and a pointer to a "buffer" as arguments. So now when the button is clicked it will pass the new value on to the DSP in a thread-safe way. The official documentation of write functions is here. The floatvalue member of dials and buttons is part of ffffltk (introduced in the other parts of this series) and was added exclusively for LV2 plugins, because they always work in floats. Or in atoms, which is a whole other ball of wax. Really though, it's easy to do this as long as you keep it to simple float data like a drone gain.
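
To make that concrete, here's a rough sketch of what a finished callback might look like once fluid has generated it (the method name is an illustrative guess, not copied from the real stuck UI):
// Hypothetical callback for the "Stick It!" button. controller and
// write_function are the two members we just declared; the host fills them
// in when our instantiate function runs (shown further down).
void StuckUI::cb_stickit()
{
    write_function(controller,
                   STICKIT,                 // port index from stuck.h
                   sizeof(float),           // "buffer" size
                   0,                       // protocol 0 = plain float
                   &stickit->floatvalue);   // pointer to the new value
}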

Another important thing you must add to the fluid design is a function called void idle(). In this function add a code block that has these 2 lines:
Fl::check();
Fl::flush();
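
Put together, the member that fluid generates should end up looking roughly like this:
// idle() will get called periodically by the host (via the idle interface we
// wire up later) so NTK can process pending events and repaint the UI.
void StuckUI::idle()
{
    Fl::check();
    Fl::flush();
}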


To help clarify everything, here's a screenshot of ntk-fluid once I've done all this. It's actually a pretty good overview of what we've done so far:


Possibly the biggest departure from what we've done previously is that now the program will not be a stand-alone binary, but a library with functions that get called by the host (just like the DSP). This means some major changes in our stuck_ui_main.cxx code.

For the GUI the most important functions are the instantiation, cleanup, and port event. To use NTK/fltk/ffffltk you will also need some LV2 extensions, which require another function called extension_data, but we'll discuss that later. The instantiation is obviously where you create your window or widget and pass it back to the host, cleanup deallocates it, and the port event lets you update the GUI if the host changes a port (typically with automation). We'll present them here in reverse order, since the instantiation with NTK ends up being the most complex. So, port event is fairly straightforward:
void stuckUI_port_event(LV2UI_Handle ui, uint32_t port_index, uint32_t buffer_size, uint32_t format, const void * buffer)
{
    StuckUI *self = (StuckUI*)ui;
    if(!format)
    {
        float val = *(float*)buffer;
        switch(port_index)
        {
        case STICKIT:
            self->stickit->value((int)val);
            self->led->value((int)val);
            break;
        case DRONEGAIN:
            self->volume->value(val);
            break;
        case RELEASE:
            self->time->value(val);
            break;
        }
    }
}


The enlightening thing about doing a UI is that you get to see both sides of what the LV2 functions do. Just as the widget callbacks send a value through the write_function, this is what the write function does on the other side: first you recast the handle as your UI object so you can access what you need, then make sure the host is passing the format you expect (0 for float, remember?), then assign the incoming value to whichever widget corresponds to the port index. This keeps your UI in sync if the host changes a value. Nice and easy.

Next up is the simplest: Cleanup:
void cleanup_stuckUI(LV2UI_Handle ui)
{
    StuckUI *self = (StuckUI*)ui;

    delete self;
}



No explanation necessary. So that leaves us with instantiation. This one is complex enough that I'll give it to you piece by piece. First off is the setup: checking that we have the right plugin (this is useful when you have a whole bundle of plugins sharing code), then dynamically allocating a UI object that will get returned as the plugin handle that all the other functions use, and declaring a few variables we'll need temporarily:
static LV2UI_Handle init_stuckUI(const struct _LV2UI_Descriptor * descriptor,
        const char * plugin_uri,
        const char * bundle_path,
        LV2UI_Write_Function write_function,
        LV2UI_Controller controller,
        LV2UI_Widget * widget,
        const LV2_Feature * const * features)
{
    if(strcmp(plugin_uri, STUCK_URI) != 0)
    {
        return 0;
    }

    StuckUI* self = new StuckUI();
    if(!self) return 0;
    LV2UI_Resize* resize = NULL;


Then we save the write_function and controller that got passed in from the host so that our widgets can use them in their callbacks:
    self->controller = controller;
    self->write_function = write_function;


Next stop: checking features the host has. This is where using NTK makes it a bit more complicated. The host should pass in a handle for a parent window and we will be "embedding" our window into the parent. Another feature we will be hoping the host has is a resize feature that lets us tell the host what size the window for our plugin should be. So we cycle through the features and when one of them matches what we're looking for we temporarily store the data associated with that feature as necessary:
    void* parentXwindow = 0;
    for (int i = 0; features[i]; ++i)
    {
        if (!strcmp(features[i]->URI, LV2_UI__parent))
    {
           parentXwindow = features[i]->data;
        }
    else if (!strcmp(features[i]->URI, LV2_UI__resize))
    {
           resize = (LV2UI_Resize*)features[i]->data;
        }
    }


Now we go ahead and start up our UI window, call the resize function with our UI's width and height as arguments, and call a special NTK function called fl_embed() to set our window into the parent window. It seems this function was created specially for NTK. I haven't found it in the fltk source or documentation, so I really don't know much about it or how you'd do it using fltk instead of NTK. But it works. (You can see the NTK source and just copy that function.) Once that's done we return our instance of the plugin UI object:
    self->ui = self->show();
    fl_open_display();
    // set host to change size of the window
    if (resize)
    {
       resize->ui_resize(resize->handle, self->ui->w(), self->ui->h());
    }
    fl_embed( self->ui,(Window)parentXwindow);

    return (LV2UI_Handle)self;
}


Ok. Any survivors? No? Well, I'll just keep talking to myself then. We mentioned the extension_data function. This function gets called and can provide various special functions if the host supports them. Similar to the port event, the same extension_data function gets called with different extension URIs, and we can return a pointer to a function that does what we want when an extension we care about gets requested. Once again we get to see both sides of a function we called. The resize stuff we did in instantiate can be used as a host feature, like we did before, or as extension data. As extension data you can resize your UI object according to whatever size the host requests. This extension isn't strictly necessary for an NTK GUI, but since the parent window we embedded our UI into is a basic X window, it's not going to know to call our fltk resize functions when it's resized.

In contrast, a crucial extension for an NTK GUI is the idle interface, because similarly the X window doesn't know anything about fltk and will never ask it to redraw when something changes. So this LV2 extension exists for the host to call a function that will check if something needs to get updated and redrawn on the screen. We already made an idle function to call in our StuckUI object through fluid, but we need to set up the stuff to call it. Our extension_data function will need some local functions to call:
static int
idle(LV2UI_Handle handle)
{
  StuckUI* self = (StuckUI*)handle;
  self->idle();
 
  return 0;
}

static int
resize_func(LV2UI_Feature_Handle handle, int w, int h)
{
  StuckUI* self = (StuckUI*)handle;
  self->ui->size(w,h);
 
  return 0;
}



Hopefully it's obvious what they are doing. The LV2 spec has some structs that are designed to interface between these functions and the extension_data function, so we declare those structs as static constants, outside of any function, with pointers to the local functions:
static const LV2UI_Idle_Interface idle_iface = { idle };
static const LV2UI_Resize resize_ui = { 0, resize_func };


And now we are finally ready to see the extension_data function:
static const void*
extension_data(const char* uri)
{
  if (!strcmp(uri, LV2_UI__idleInterface))
  {
    return &idle_iface;
  }
  if (!strcmp(uri, LV2_UI__resize))
  {
    return &resize_ui;
  }
  return NULL;
}

You see, we just check the URI to know if the host is calling the extension_data function for an extension that we care about. If it is, we pass back the struct corresponding to that extension. The host knows how these structs are formed and uses them to call the functions to redraw or resize our GUI when it thinks it's necessary. We aren't really guaranteed any timing for these calls, but most hosts are gracious enough to call them at a frequency that gives pretty smooth operation. Thanks, hosts!


So, it's now time for the ugly truth to rear its head. Full disclosure: this implementation of the resizing extension code doesn't work at all. The official documentation describes this feature as being two-way, host to plugin or plugin to host. We've already used it as plugin to host and that works perfectly, but I can't get the other direction to work. The trouble is when we declare and initialize the LV2UI_Resize object. The first member of the struct is of type LV2UI_Feature_Handle, which is really just a void* that should point to whatever data the plugin will want to use when the function in the second member of the struct gets called. Well, for us, when resize_func gets called we want our instance of the StuckUI that we created in init_stuckUI(). That would allow us to call the resize function. But we can't, because it's out of scope, and the struct must be a constant so it can't be assigned in the instantiate function. So I just have a 0 as that first argument and actually have the call to size() commented out.

Perhaps there's a way to do it, but I can't figure it out. I included that information because I hope to figure out how and someday make my UI completely resizable. The best way to find out, I figure, is to post fallacious information on the Internet and pretty soon those commenters will come tell me how wrong and stupid I am. Then I can fix it.

As a workaround you can put in your manifest.ttl this line:
lv2:optionalFeature ui:noUserResize ;

This at least keeps the host from letting the user resize the plugin window while the UI stupidly sits there at the same size. If the host supports it.

"So if its not even resizable why in the world did you drag us through 3 long detailed posts on how to make LV2  GUIs out of SCALABLE vector graphics?!" you ask. Well, you can still make perfectly scalable guis for standalone programs, and just having a WYSIWYG method of customized UI design is hopefully worth something to you. It is to me, though I really hope to make it resizable soon. It will be nice to be able to enlarge a UI and see all the pretty details, then as you get familiar with it shrink it down so you can just use the controls without needing to read the text. Its all about screen real estate. And tiling window managers for me.


So, importantly, in LV2 we need to have a standard function that passes all these functions to the host so the host can call them as necessary. Similar to the DSP side, you declare a descriptor, which is really a standard struct that has the URI and function pointers to everything:
static const LV2UI_Descriptor stuckUI_descriptor = {
    STUCKUI_URI,
    init_stuckUI,
    cleanup_stuckUI,
    stuckUI_port_event,
    extension_data
};


And lastly the function that passes it back. Its form seems silly for a single plugin, but once again you can have a plugin bundle (or a bundle of UIs) sharing source that passes the correct descriptor for whichever plugin is requested (by index). It looks like this:
LV2_SYMBOL_EXPORT
const LV2UI_Descriptor* lv2ui_descriptor(uint32_t index)
{
    switch (index) {
    case 0:
        return &stuckUI_descriptor;
    default:
        return NULL;
    }
}


As a quick recap, here are the steps to go from Inkscape to Carla (or your favorite LV2 plugin host):
1. Draw a GUI in Inkscape
2. Save the widgets as separate svg files
3. Convert to cairo code header files
4. Edit the draw functions to animate dials, buttons, etc. as necessary.
5. Create the GUI in ntk-fluid with the widgets placed according to your Inkscape drawing
6. Include the ffffltk.h and use ffffltk:: widgets
7. Assign them their respective draw_functions() and callbacks
8. Add the write_function, controller members, and the idle() function
9. Export the source files from fluid and write a ui_main.cxx
10. Update your ttl
11. Compile, install, and load in your favorite host.

Our plugin in Jalv.gtk


So you now have the know-how to create your own LV2 plugin GUIs using Inkscape, svg2cairo, ffffltk, ntk-fluid, and your favorite editor. In 11 "easy" steps. You can see the source for the infamous Stuck that I developed this workflow through in my infamous repository. And soon all the plugins will be ffffltk examples. I'll probably refine the process and maybe I'll post about it. Feel free to ask questions. I'll answer to the best of my ability. Enjoy and good luck.

As an aside, in order to do this project I ended up switching build systems. qmake worked well, but I mostly just copied the script from Rui's synthv1 source and edited it for each plugin. Once I started needing to customize it more to generate separate DSP and UI binaries I had a hard time. I more or less arbitrarily decided to go with CMake. The fact that drmr had a great CMake file to start from was a big plus. And the example waf file I saw freaked me out, so I didn't use waf. I guess I don't know Python as much as I thought. CMake seemed more like a functional programming language, even if it is a new syntax. I was surprised that in more or less a day I was able to get CMake doing exactly what I wanted. I had to fight with it to get it to install where I wanted (read: obstinate learner), but now it's ready for whatever plugins I can throw at it. So that's what I'm going to use going forward. I'll probably leave the .pro files for qmake so you can build without a GUI if you want. But maybe I won't. Complain loudly in the comments if you have an opinion.

by Spencer (noreply@blogger.com) at November 21, 2014 01:57 PM

OpenAV

New site online!

Hey!

OpenAV has been relatively quiet the last few weeks: not for lack of things happening here though! This new site has been designed, and OpenAV has attended a lot of events to promote linux-audio and the OpenAV software.

Now that the new site is in place, expect to see regular updates of the status of OpenAV, and some changes in terms of how software will be released, financed, and shared with the community.

We live in exciting times! Cheers, -OpenAV

by harry at November 21, 2014 01:33 PM

Hackaday » digital audio hacks

Speaker Cabinet Boom Box Build

When you get that itch to build something, it’s difficult to stop unless you achieve a feeling of accomplishment. And that’s how it was with [Rohit’s] boombox build.

He started out with a failing stereo. He figured he could build a replacement himself that played digital media, but his attempts at mating microcontrollers and SD cards were thwarted. His backup plan was to hit DX for a cheap player, and he was not disappointed. The faceplate he found has slots for USB and SD card, 7-segment displays for feedback, and both buttons and a remote for control. But this little player is meant to feed an amplifier. Why buy one when you can build one?

[Rohit] chose ST Micro's little amp chip, the TDA2030, in a Pentawatt package (this name for a zig-zag in-line package is new to us). We couldn't find stocked chips from the usual suspects, but there are distributors with singles in the $3.50-5 range. [Rohit] tried running it without a heat sink and it gets hot fast! If anyone has opinions on this choice of chip (or alternatives) we'd love to hear them.

But we digress. With an amp taken care of, he moved on to sourcing speakers. A bit of repair work on an upright set got them working again. The bulky speaker box has more than enough room for the amp and front-end, both of which are pretty tiny. The result is a standalone music player that he can be proud of having hacked together himself.


Filed under: digital audio hacks

by Mike Szczys at November 21, 2014 09:01 AM

November 20, 2014

Create Digital Music » Linux

Bitwig Studio 1.1 Adds Lots of Details; Can It Escape Ableton’s Shadow?

bitwig_opener

Bitwig Studio has been quietly plugging along in development, adding loads of engineering improvements under the hood. Version 1.1 is the largest update yet.

Here’s the summary of the update:
https://www.bitwig.com/en/bitwig_1up

Minus the marketing speak, the exhaustive changelog (here, for Mac): http://www.bitwig.com/dl/8/mac

It's an impressively long list of enhancements, though most of the changes are fixes and enhanced hardware and plug-in compatibility. For instance, you can side-chain VSTs, and there are new options for routing multiband effects and multi-channel plug-ins.

The big enhancements:

  • More routing for audio and MIDI
  • VST multi-out sidechain support and multi-channel effect hosts
  • Updated controller API
  • New Audio Receiver, Note Receiver, Note MOD, De-Esser devices

And you can genuinely deactivate devices to save CPU, something Live lacks, as well as take advantage of “true latency compensation.” (Whatever that means – that will require some testing. Bitwig’s explanation of what makes their tech different is that it actually works. That sounds good.) Some other features play catch-up with Ableton Live – tap tempo and crossfader, modulation and timestretching. But it’s a welcome update.

And as we’ve tangled recently with Ableton Live’s spotty controller support and the weird gymnastics required to make controllers work, it’s worth scolding Ableton for not making their hardware integration work better. Bitwig, with a sliver of the development resources and very little incentive for hardware makers to add support, is quickly adding controller support simply because it’s easier to do. This could be a model for Ableton, particularly as its user base and the diversity of hardware for it continue to expand.

If you’re on desktop Linux (yes, I’m sure someone is out there), the choice is easy: Bitwig is a terrific, fun piece of software with lots of rather nice effects and instruments. It’s fast and ready to go out of the box. And there isn’t much else native on Linux that can say that (Renoise springs to mind, but it has a very different workflow).

The problem is, if you’re not on Linux, I still can’t work out a reason I’d recommend Bitwig Studio over other tools. And, of course, the elephant in the room is Ableton Live. I reviewed Bitwig Studio for Keyboard, and found plenty to like. But the problem was, Bitwig Studio has competition, and as I wrote for that magazine, to me it comes a bit too close to Live to be able to differentiate itself:

While Bitwig Studio improves upon Live’s editing functionality, it replicates even some of Live’s shortcomings: There’s no surround audio support, nor any track comping facility…

Compared to Ableton Live Standard, Bitwig Studio’s offerings are fairly comparable. But at that price, Ableton gives you 11GB of sound content, more complete plug-in support, more extensive routing, more controller compatibility, and video support.

Since writing that review, two of these have changed. Controller compatibility is a narrowing advantage for Ableton because of Bitwig's superb scripting facility and aggressive hardware support. And routing MIDI between tracks has been fixed, which, combined with the new modular devices, allows for more flexible routing in Bitwig than in Ableton in certain cases.

The problem is, if you want a change from Live, you likely want software that works differently (Cubase and the like for traditional DAWs, Maschine for drum machine workflows, Renoise for a tracker, and so on). If you want a Live-style workflow, you’re likely to choose Ableton Live.

You can read my whole review for Keyboard and see if you reach a different conclusion, though:

Bitwig Studio reviewed [Keyboard Magazine]

And as I’ve seen a handful of people start to use Bitwig, I’d be curious to hear from you: what was the deal maker that convinced you to switch? What is Bitwig offering you that rivals don’t?

The DAW market remains a competitive one, and it’s clear there’s always room for choice. Bitwig’s development pace at least continues moving forward. But I’ll keep repeating: I’d like to see this tool stray from its rivals.

And for me, the main thing is: once that review was done, I found myself returning to Ableton Live and finishing tracks, and not Bitwig Studio – even if I sometimes cursed Live’s shortcomings. Even if that is simply force of habit, it seems I’ll need more to kick that habit. And, unfortunately, you can’t judge software based on its forthcoming features.

Update: I've heard from some fairly vocal Bitwig users (well, I did ask). Some of them I can't parse into specific feedback or use cases ("it's just better" wasn't what I was hoping for). But I have heard three themes, apart from Linux use, where, as I said, Bitwig Studio is a no-brainer:

1. Dynamic routing. Because routing is more flexible, and can operate dynamically, some of you are using Bitwig Studio as a kind of modular sound design environment. It seems to me this advantage would become more radical if Bitwig can ship their promised forthcoming open modular environment – then, it’s a whole different game, as that tool is integrated with the DAW rather than being grafted on top as with Max for Live. But I do see a use case here.

2. Workflow/usability with sessions. I found that the ability to open multiple projects at once and to have side-by-side session (clip) and arrangement views made less of an impact in my work than I expected. But to some of you, it’s important. Now, in my case, I otherwise found Bitwig’s UI more rigid than Live’s. They don’t look identical, though, and that becomes a matter of taste.

3. Performance. Live can be sluggish at certain tasks; Bitwig has a new from-scratch engine, and operations like opening projects are definitely snappier.

Combine this with Bitwig Studio's suite of effects and instruments – though it has to stack up against Live gems like Simpler, Operator, and physical modelling instruments, for instance. This wouldn't convince me to switch, but at least it provides an insight into those who have. Keep the feedback coming.

The post Bitwig Studio 1.1 Adds Lots of Details; Can It Escape Ableton’s Shadow? appeared first on Create Digital Music.

by Peter Kirn at November 20, 2014 04:40 PM

Libre Music Production - Articles, Tutorials and News

Guitarix 0.32.0 released

The Guitarix developers have just announced the release of Guitarix version 0.32.0. For the uninitiated, Guitarix is a tube amplifier simulation for JACK, with an additional mono and a stereo effect rack. Guitarix includes a large list of plugins and also supports LADSPA/LV2 plugins.
 
This release fixes some bugs. It adds new tuning scales (19- and 31-TET) to guitarix and Gxtuner.lv2 and also includes a new LV2 plugin, GxMultiBandReverb.
 

by Conor at November 20, 2014 03:50 PM

Recent changes to blog

Guitarix 0.32.0 released

The Guitarix developers proudly present

Guitarix release 0.32.0

alternate text

For the uninitiated, Guitarix is a tube amplifier simulation for
jack (Linux), with an additional mono and a stereo effect rack.
Guitarix includes a large list of plugins[*] and supports LADSPA / LV2 plugins as well.

The guitarix engine is designed for LIVE usage, and features ultra-fast, glitch- and click-free preset switching. It is fully MIDI and/or remote controllable (the Web UI is not included in the distributed tarball).

Here is the "Ultimate Guide to Getting Started With Guitarix"

This release fixes bug #16 "empty effect menu with clear-skin option",
adds new tuning scales (19- and 31-TET) to guitarix and Gxtuner.lv2,
adds MIDI Clock and JACK Transport support to guitarix, moves a couple of controllers from
unit:ms|hz to unit:bpm so that they can easily be synced with the MIDI Beat Clock,
and introduces a new LV2 plugin:
* GxMultiBandReverb

Please refer to our project page for more information:
http://guitarix.sourceforge.net/

Download Site:
http://sourceforge.net/projects/guitarix/

Forum:
http://guitarix.sourceforge.net/forum/

Please consider visiting our forum or leaving a message on
guitarix-developer@lists.sourceforge.net

regards
hermann

by brummer at November 20, 2014 03:47 PM

November 19, 2014

Create Digital Music » open-source

Meet KORG’s New Sample Sequencing volca – And its SDK for Sampling

volcasample

The KORG volca sample is here – and it’s more open than we thought.

We’ve seen KORG’s affordable, compact, battery-powered volca formula applied to synths (BASS and KEYS) and a drum machine (BEATS). I’m especially partial to the booming kick of the BASS, the sound of the KEYS (which despite the name also works as a bass synth), and the clever touch sequencing interface.

Well, now, having teased the newest addition to the family, we're learning about the details of the KORG volca sample. It's not a sampler per se – there's no mic or audio input – but what KORG calls a "sample sequencer."

We’ll have a unit in to test soon, but my impression is that sample sequencing isn’t a bad thing at all. Sequencing has always been a strong suit for the volca, and here, it’s the main story. Every parameter of a sample is ready to step sequence, from the way the sample is sliced, to its playback speed and amplitude envelope, to pitch.

Additional features:

  • Reverse samples
  • Per-part reverb (ooh)
  • Active step / step jump (for editing steps)
  • “A frequency isolator, which has become a powerful tool in the creation of numerous electronic genres.” Or, um, to make that understandable, there are treble and bass analog filters.
  • Swing
  • Song mode – 16 patterns x 6 songs

That leaves only how to get samples into the volca sample, beyond the 100 samples already built in.

It has exactly the same complement of jacks on the top as the synth and drum machine volcas – sync signal in and out, MIDI in, headphone out, and … nothing else. So, instead, KORG wants you to use an iOS handheld to record samples first. You transfer them into the unit via one of the sync jacks. Initially, that came as a bit of a shock, and judging by comments, at least some of you readers didn't like the decision much. Frankly, looking at the unit, it looks like there just wasn't room; KORG dedicated the jacks to their usual function and used up the whole panel on sampling and sequencing controls.

volcasampletop

Since then, though, we’ve had two developments that might get your interest back.

First, we’ve seen the iOS app, and it looks really cool. Brace yourself for cute video of designer Tatsuya Takahashi’s kid!

Okay, so the transfer process is a bit of a pain, but cutting samples on the iPhone is convenient, since you can see what you’re doing. It also solves the problem of needing to have a mic handy.

Here’s the surprise second development: KORG is releasing a free SDK for talking to the volca sample:

http://korginc.github.io/volcasample/

Basically, the volca sample’s trick is to encode binary data as audio signal, in the same way dial-up modems once did. (The technique is QAM – quadrature amplitude modulation – in case you’re interested.) The SDK helps you encode that data yourself. The software gives you several features:

1. You can encode audio samples to transfer – individually or as an entire 16-step sequence.
2. You can manage samples on the device (delete them individually or delete all of them).

The SDK and library are written in C, which means they could be used just about anywhere. I expect an Android app from a volca lover will be one of the first applications. It doesn't have to stop there, though. You could build interesting sample-generating desktop apps – the KORG site suggests possibilities:

“Auto-slice a song to generate a sample set?
Turn photos of patterns into sequences?
algorithmic sample music generator?
generate random sequence from quantum effects?”

connect

And, oh yeah, you could even make your own sampling hardware with the library, though… if you’re savvy enough to do that, you might just go ahead and make your own sampling hardware.

Speaking of your own hardware, unfortunately there isn’t any decode capability, though I don’t see why someone couldn’t make their own. (QAM decoding is already something that’s widely available.)

What you get in the SDK source:

  • The “syro” library, the bit that does the encoding.
  • A project sample with examples, ready to build with gcc, clang, and Visual Studio 2010 or later.
  • Definitions for patterns.
  • Factory preset / reset data.

So, if someone wants to make a bare-bones sample project for the iOS SDK or Android SDK, for instance, let us know!

The whole project is covered under a BSD license, so highly permissive. Have a look, developers (or, um, Android users who aren’t developers, keep your fingers crossed, start buying beers and nice Christmas presents for your Android dev friends, whatever):

http://korginc.github.io/volcasample/documentation.html

https://github.com/korginc/volcasample

The volca sample is shipping, it seems, in small quantities, but isn't yet widely available. Stay tuned.

http://www.korg.com/us/products/dj/volca_sample/

Specs:

This is the heart of this beast – sequencing, sequencing, sequencing of … everything, actually, as the list below is identical to the list of sample parameters.

Parameters that can be used with Motion Sequence:
・ Start Point (Playback start location)
・ Length (Playback length)
・ Hi Cut (Cutoff frequency)
・ Speed (Playback speed)
・ Pitch EG Int (Pitch EG depth)
・ Pitch EG Attack (Pitch EG attack time)
・ Pitch EG Decay (Pitch EG Decay time)
・ Amp Level
・ Pan
・ Amp EG Attack (Amp EG Attack time)
・ Amp EG Decay (Amp EG Decay time)

And full specs:

100 sample slots (you can overwrite these)
4 MB (65 seconds) sample memory total (of course, divided across those 100 slots)
31.25 kHz, 16-bit
Digital Reverb
Analogue Isolator
10 parts, 16 steps
Sync In (3.5mm monaural mini jack, Maximum input level: 20V)
Sync Out (3.5mm monaural mini jack, Maximum Output level: 5V)
MIDI IN
10 hour estimated battery life on 6 AA batteries or optional 9V AC adapter
372 g / 13.12 oz. (Excluding batteries)
193 × 115 × 45 mm / 7.60” x 4.53” x 1.77”

The post Meet KORG’s New Sample Sequencing volca – And its SDK for Sampling appeared first on Create Digital Music.

by Peter Kirn at November 19, 2014 11:24 AM

Hackaday » digital audio hacks

‘Nutclough’ Circuit Board Design is Stylishly Amplified

Though there is nothing wrong with the raw functionality of a plain rectangular PCB, boards that work an edge of aesthetic flair into their layout leave a lasting impression on those who see them. This is the philosophy of circuit artist [Saar Drimer] of Boldport, and the reason why he was commissioned by Calrec Audio to create the look for their anniversary edition amplifier kit. We've seen projects by [Saar] before and this 'Nutclough18' amplifier is another great example of his artistic handiwork.

For the special occasion of their 50th anniversary, Calrec Audio contacted [Saar] requesting he create something a bit more enticing than their standard rectangular design from previous years. With their schematic as a starting point, [Saar] used cardboard to mock-up a few of his ideas in order to get a feel for the placement of the components. Several renditions later, [Saar] decided to implement the exact proportions of the company's iconic Apollo desk into the heart of the design as an added nod back to the company itself. In the negative space between the lines of the Apollo desk there is a small perforated piece depicting the mill where the Calrec offices are located. The image of the mill makes use of different combinations of copper, silk and solder mask either absent or present to create shading and depth as the light passes through the board. This small piece that would have otherwise been removed as scrap can be snapped off from the body of the PCB and used as a commemorative keychain.

With the battery and speaker mounted behind the completed circuit board, [Saar's] design succeeds in being a unique memento with stylish appeal. There is a complete case study with detailed documentation on the Nutclough, from cardboard to product, on the Boldport website. Here you can also see some other examples of their gorgeous circuit art, or check out their open-source software to help in designing your own alternative PCBs.


Filed under: digital audio hacks

by Sarah Petkus at November 19, 2014 03:01 AM

November 18, 2014

aubio

Bitwig ships aubio, and shouldn't

On March 27th, the first public release of Bitwig, a digital audio workstation often compared to Ableton Live, was announced. Bitwig had made a bit of noise for the past few years as promising software for music composers and producers.

That same day, a good friend of mine gave me the news that vamp-aubio.so, a binary version of the Vamp plugin for aubio, was included in the first public demo of Bitwig, along with an old binary version of the entire aubio library.

It seems they decided to use aubio's onset detection to automatically slice their samples. Now, in the default configuration, aubio is not used. But Bitwig is commercial software, and should not include GPL code in any way.

After being asked, Bitwig publicly confirmed that they were not using [aubio] anymore, and had just forgotten to remove the file. I also wrote to them asking them to do so and received a mail from their side confirming they would remove it as soon as possible.

Version 1.1 was released a few days ago, and it still ships aubio's binary. Every version from 1.0.2 to 1.1 has contained aubio's binary code. How could someone possibly forget to run rm -f in no less than eight months and several releases?

November 18, 2014 09:27 PM

rncbc.org

This old donkey's learning new languages...

Git... pretty slowly...

that is to say we're (gosh! I mean) I am (one has to be honest, ain't we? never mind, I say), moving all our (dang, my) projects source code control to Git.

Yeah, finally. Never is too late, somehow.

And the time is now, or is it?

Anyway, in the meanwhile, the official Subversion (SVN) repositories are moving no matter what,

the old paths,

http://svn.code.sf.net/p/PROJECTNAME/code/...

are from this moment now being migrated to:

http://svn.code.sf.net/p/PROJECTNAME/code_attic/...

and that's it.

Or in some old dead and wrong latin: ego dixit*, whatever ;)

nuff said.

* ps. the ego particle is the wrong one: in (old) latin, verbs don't get pronouns pre-pended, whatever again x)

[UPDATE:] the new respective Git repositories will from now on be referred to as follows:

http://git.code.sf.net/p/PROJECTNAME/code

pps. regarding post caption: it's a literal translation to an old saying around here (pt_PT)--many should reckon it vaguely related to old dogs and new tricks ;)

by rncbc at November 18, 2014 06:30 PM

Create Digital Music » open-source

Hack Biology, Body, and Music: Open Call for MusicMakers Hacklab

hacklab1

For the past two winters, CDM has joined with Berlin’s CTM Festival to invite musical participants to grow beyond themselves. Working in freshly-composed collaborations, they’ve built new performances in a matter of days, then presented them to the world – as of last year, in a public, live show.

This year, they will work even more deeply inside themselves, finding the interfaces between body and music, biology and sound.

And that means we're inviting everyone from choreographers to neuroscientists to apply, as much as musicians and code makers. Playing with the CTM theme of "Un Tune," the project will this year encourage participants to imagine biology as a sonic system, sound in its bodily effects, and otherwise connect embodiment to physical reality.

Joining me is Baja California-born Leslie Garcia, a terrific sound artist and maker who has already gone from participating in last year’s lab to organizing her own in her native Mexico. You can glimpse her below looking like a space sorceress of some kind, and hear the collaborative work she made last winter.

The 2014 hacklab’s output, all wired up for the performance. Photo: CTM Festival.

We don’t know what people will propose or what meaning they will find out of that theme, but it might include stuff like this:

  • Human sensors (Galvanic Skin Response, EKG, EEG, eye movement, blood pressure, respiration and mechanomyogram or MMG)
  • Biofeedback systems
  • Movement sensors
  • Electrical stimulation
  • Aural and optical stimulation
  • Data sonification
  • Novel physical controllers
  • Dance performance, breathing techniques, and other physical practices

Or as CTM puts it, they will navigate “the spectrum between bio-acoustics, field recordings, ambient, flicker, brainwave entrainment, binaural beats, biofeedback, psychoacoustics, neo-psychedelia, hypnotic repetition, noise, and sub-bass vibrations” to both address and disturb the body.

And what I do know is, the most effective work will come out of new collaborations, new unexpected partnerships across fields – because that’s been my consistent experience with hacklabs past, as with the spatial sound project we did last month in Amsterdam or the CTM collaboration 2014, which I’ll document a little later this month.

Leslie is a great example of that. She initiated a collaboration with Stefano Testa at the hacklab in January/February. They produced, in a few short days, “Symphony for Small Machine” – exactly the sort of irreverent project we hoped would spring out of the week. Have a look:

http://lessnullvoid.cc/content/2014/02/symphony-for-small-machine-2/

And check out lots of other work talking to plants and bacteria and harnessing free and open source software:

http://lessnullvoid.cc/content/projects/

LeslieGarcia2

Apply to this year’s open call:

http://www.ctm-festival.de/festival-2015/transfer/musicmakers-hacklab/

December 12 is the deadline – yep, I know what a big part of my Christmas season will be about this year, and I couldn’t be more pleased.

And hope to see some of you in Berlin.

Follow the MusicMakers project on Facebook – I’ve been lax about updating this page, but will do so again!

The post Hack Biology, Body, and Music: Open Call for MusicMakers Hacklab appeared first on Create Digital Music.

by Peter Kirn at November 18, 2014 02:46 PM

November 17, 2014

Scores of Beauty

Restoring Deleted LilyPond Files

There are plenty of opportunities to delete important files, and even cases where you deliberately do it and only some time later notice that it was a stupid mistake. However, if you are using version control you're lucky, because you don't have to worry at all. No need to hope that some undelete tool can be applied in time before something else irreversibly overwrites the bytes on disk, etc.

Well, this is neither new nor specific to music editing, but I thought it a good idea to write a short post about it anyway. It will increase the chance that someone involved in music stumbles over the information, and it is yet another spotlight on the potential of versioned workflows for music editing.

Oops

In a previous post I explained that in our crowd engraving challenge to collaboratively engrave a huge orchestral score we delete files for segments of the music where instruments actually play nothing. Going through a part deleting such files makes you feel good because it's the fastest way to advance the "completion status" of our status reports. But I have to admit that it's also quite an error-prone process where you can easily do too much and delete files which you shouldn't. For example there are numerous segments that only contain a finishing note of a phrase, after which the instrument disappears in frenched (suppressed) staves. It's easy to overlook such a fragment … The next person who is then working on that part will stumble over it and wonder why there is no empty segment template in which to enter the music.

Fortunately it is dead easy to restore that file, no matter how much work has been done in the meantime. Actually this involves only two steps.

Identifying Where the File Has Been Deleted

git log has plenty of options that you can use to start trying to pin down the problem. However, the seemingly natural approach won’t work:

.../das-trunkne-lied$ git log parts/clarinetI/01.ily
fatal: ambiguous argument 'parts/clarinetI/01.ily': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions

Well, it's somewhat logical that you cannot retrieve the history of something that isn't there anymore, isn't it? But Git wouldn't be Git if it didn't assist you gracefully with such trivial tasks. And actually it tells you immediately what to do: use --. This will tell Git that yes, it is a file you are looking for, and ask it to look in its history if there isn't such a file in the current working tree. You can even tell Git to return only the commit that deleted the file:

.../das-trunkne-lied$ git log --diff-filter=D -- parts/clarinetI/01.ily
commit 50e62d87cf2dc3e1b0c39812f0b001d85e388712
Author: Urs Liska <git@ursliska.de>
Date:   Tue Sep 30 14:19:15 2014 +0200
 
    clarinetI: Remove empty segments

Well, oops, it was myself. Good that it’s me who noticed ;-)

OK, now we have identified the commit where the file was deleted, but what next? Basically we need to find the commit that immediately precedes that one because that is the last commit where the interesting file had been present, so it also represents the file’s last state before deletion. To do this we first modify the previous request to only show the commit hash:

.../das-trunkne-lied$ git log --pretty=format:%H --diff-filter=D -- parts/clarinetI/01.ily
50e62d87cf2dc3e1b0c39812f0b001d85e388712

Using the $(...) notation we will now retrieve the Git log from that commit backwards, restricting it to the previous commit:

/das-trunkne-lied$ git log -1 --pretty=format:%H $(git log --pretty=format:%H --diff-filter=D -- parts/clarinetI/01.ily)~1
8da631a9d92129479867a2c876bbdd9fe1ccfbe0

What does this do?
Well, first we retrieve (as above) the commit hash deleting the file. This is enclosed in $(...) and used as input to yet another git command. Actually it’s the same git log command with two modifications: The first option is -1, restricting the output to just one commit while the ~1 at the end of the command tells Git to use “one commit before” the one we retrieved from inside the $(). So eventually our call returns the commit ID of the last commit where our file was present.

Restoring the File

Restoring the file is surprisingly simple:

.../das-trunkne-lied$ git checkout 8da631a9d92129479867a2c876bbdd9fe1ccfbe0 parts/clarinetI/01.ily
.../das-trunkne-lied$ git status
# On branch staging
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#   new file:   parts/clarinetI/01.ily

Why on earth does this work, isn't git checkout used for switching between branches? Well, yes, but not exclusively. According to the man page, git checkout "Updates files in the working tree to match the version in the index or the specified tree". When you use it for switching branches this is actually what it does. But you can also use it to "update" any file in the working tree to any state that can be referenced by a commit, branch or tag. So what we did was update parts/clarinetI/01.ily to the last state of the repository that contained that file, effectively restoring it from the Git history.

Now the file is present in the working tree and staged, so we can simply commit it and have it ready for music entry. I won't do this now because the file I used for this example has actually been deleted correctly and I don't want it to be restored. So I simply do what Git suggests: git reset HEAD parts/clarinetI/01.ily and then delete it from disk. This last exercise can be taken as yet another example of how easy your life can be with version control, when you don't have to worry about messing up your working directory through processes such as deleting or restoring arbitrary files.

Summary

Maybe this looks overly complicated and even daunting to someone not very familiar with Git. But basically you can work your way towards such a solution by building it step by step. There are many examples out there that give you parts of the solution, from which you can work your way onwards.

If we expected to have this problem regularly we could even go one step further and not call that last git checkout command manually, but feed it with the output of the previously developed commands. But I think that would be overkill – because then we'd run into the 80/20 rule saying that the last 20% of a programming task require 80% of the time. I wouldn't feel particularly safe having such a nested command that would try to restore a file with one call of, say, git-restore parts/clarinetI/01.ily without implementing quite some safety infrastructure.

by Urs Liska at November 17, 2014 06:12 PM

Libre Music Production - Articles, Tutorials and News

Non Mixer, now with LV2 support!

falkTX from KXStudio has just added LV2 support to Non Mixer. This is not upstream at the moment but is available in the KXStudio repositories. This will hopefully be accepted upstream and become part of Non Mixer. In the meantime, you can try out the LV2-supported version if you are using KXStudio by simply updating your system.
 
 

by Conor at November 17, 2014 02:07 PM

November 14, 2014

ardour

Email issues (now resolved)

For about a day from 15:40 GMT on November 13th, all outbound email from ardour.org was being rejected by our outbound email server. The time on ardour.org had drifted more than 5 minutes from the email server's own time. I have corrected the time and installed an NTP client to ensure that this does not happen again.

If you were expecting email from ardour.org in connection with a new account, attempt to reset your password and you will get a new email.

If you were expecting email from ardour.org in connection with a download, please contact me (paul@linuxaudiosystems.com) and I'll get you the information you need.

Apologies for the problems.

read more

by paul at November 14, 2014 06:07 PM

November 13, 2014

Create Digital Music » Linux

Lemur is Now on Android, Supports Cabled Connections; You Want This Touch App

lemurlemur

Before there even was an iPad or iPhone, there was Lemur. The touch-based controller device was theoretically the first-ever consumer multi-touch hardware. Early adopters connected the pricey smart display via Ethernet to a computer, and wowed friends with flying faders and bouncing balls and new ways of doing everything from manipulating spatial audio to playing instruments.

Then, the iPad arrived, and Lemur had a new life as an iOS-only app. For many of us, it alone is reason enough to own an Apple tablet.

But Apple tablets are pricey. Android tablets are cheap. And Android tablets are increasingly available in more sizes. So, maybe you want to run Lemur on Android. Maybe it’s your only tablet. Or maybe you’re just worried that now your live performance set depends on an iPad mini, and if it dies, you’re out hundreds more – so Android is an appealing backup.

Well, now, Lemur has come to Android. It wasn’t easy; it required lots of additional testing because of the variety of devices out there and weird peculiarities of making Android development work properly. (Disclosure: I was one of Lemur’s testers, and was gratified when it suddenly started working on my Nexus 7, which is a fairly excellent low-cost device.)

But now it’s here. And it’s fantastic. Nick from Liine came to our monthly mobile music app meetup in Berlin and showed us just how easy it is to code your own custom objects using the canvas – more on that soon. But combine that with a stable app for hosting your own creations, and Lemur is simply indispensable. It’s US$24.99 on the Google Play store.

Oh, and one more thing: wires.

Yes, sure, part of the appeal of tablets is wireless control. That allows you to walk around a performance venue, for instance, whilst controlling sounds and mixing. But in live situations, it sure is nice to avoid wifi connection problems and depend on a conventional wire. On both Android and iOS, this requires a special driver – at least if you want to connect directly via USB. But there’s already a free and open source Mac driver for Android, and it works really nicely with Lemur:

http://joshuawise.com/horndis

I am absolutely going to start carrying both my Nexus 7 and my iPad mini – I now never have to worry that one tablet will die or the iPad WiFi will decide to stop working in the middle of a show. I might even put them in different bags. You know – redundancy. And for Android lovers, this is great news. (They've been getting a handful of excellent apps lately, which, while nowhere near the iOS ecosystem, still mean you can get a lot of use out of an Android tablet. But that's a story for another day.)

More on Lemur:

Lemur

And grab it from the Google Play store:

Lemur @ Google Play

The post Lemur is Now on Android, Supports Cabled Connections; You Want This Touch App appeared first on Create Digital Music.

by Peter Kirn at November 13, 2014 02:01 PM

Linux Audio Announcements - laa@linuxaudio.org

[LAA] AV Linux 6.0.4 'Diehard' Released!

From: MGV-AV Linux <info@...>
Subject: [LAA] AV Linux 6.0.4 'Diehard' Released!
Date: Nov 13, 12:55 pm 2014

Hello fellow Linux Audio peeps!

Apologies for this slightly delayed announcement, I wanted to sort out
some server issues before posting this ...

AV Linux 6.0.4 is released... before I get to the good stuff I want to
take a brief moment to say how thrilling it was to meet so many of the
luminaries in the Linux Audio universe at LAC 2014. It was truly the
experience of a lifetime and it is so great to remember your faces when
your various projects and release announcements come through on this
mailing list... AV Linux is merely a frame for your masterpieces and the
quality and depth of your art continues to amaze and inspire!

With sincere thanks, Glen MacArthur - AV Linux Maintainer

OK, on with it then

Full release announcement here:
http://www.remastersys.com/forums/index.php?topic=3474.0

Changelog from AV Linux 6.0.3-6.0.4:

Bugfixes:

- Updated Ardour3 builds to a special preview 3.5-3368 to prevent possible
MIDI/Audio data loss (thanks Paul Davis and Robin Gareus!)
- Reverted JACK 0.124.1 utility scripts to utilize A2JMIDID because
'alsa_midi' doesn't hotplug when new MIDI hardware is plugged in...
- Removed 'broadcom-sta-common' and its blacklist file to allow more
Broadcom devices to work OOTB
- Added menu button to reload XFWM in 'Settings' Menu in the rare event it
crashes on login
- Disabled default Auto-mounting of HDD Partitions in a Live Session
(thanks Zensub!)
- Addition of Squeeze LTS Repositories for security updates including
patched BASH for the 'shellshock' bug
- Installer no longer offers non-English locales since they seem to be
broken, however non-English keyboards can still be set up.
- Replaced Iceweasel with Firefox and added 'Ubuntuzilla' repos for an
up-to-date browser option..

Updates:

- Updated to 3.12.19 lowlatency PAE default Kernel (Thanks Trulan Martin!)
**An optional full RT Preempt Kernel is provided but is not compatible
with proprietary nVidia/Ati Video Drivers**
- Updated Harrison Mixbus Demo to version 2.5 including new LV2 Plugins
bundled systemwide to /usr/lib/lv2 (Thanks Ben Loftis!)
- Updated Adobe Flash Browser plugin for security fixes
- Updated AmSynth to 1.5.1 (Thanks Nick Dowell!)
- Updated Guitarix to 0.31(Thanks Hermann Meyer!)
- Updated OpenAV ArtyFX LV2 Plugins to 1.2 (thanks Harry for OpenAV updates!)
- Updated Luppp to 1.0.1
- Updated QmidiArp to 0.6.1
- Updated Qtractor to 0.6.3 (Thanks to Rui for all new Q-stuff below!)
- Updated Qjackctl to 0.3.12
- Updated Qmidinet to 0.2.0
- Updated VeeOne LV2 Plugins to 0.5.1
- Updated Carla Plugin Host to 2.0beta3b plus new VST Plugin (Thanks falkTX!)
- Updated LMMS to 1.0.95 + Carla Plugin host support... yep, LV2 synths in
LMMS! (Thanks diizy and falkTX!)
- Updated Drumgizmo LV2 to 0.9.6 (Thanks you wild and crazy Danes!)
- Updated LV2 Stack to most recent releases (Thanks drobilla!)
- Updated Xjadeo to 0.8.0 binary (Thanks Robin Gareus!)
- Updated HarVid binaries to 0.7.5 (Thanks Robin Gareus!)
- Updated Renoise Demo to 3.0.0
- Updated Pianoteq Demo to 5.1.1
- Updated Patchage to 1.0.0 (Thanks drobilla!)
- Updated Yoshimi to 1.2.4 (Thanks Will Godfrey!) *note as always banks
are in /usr/local/share/yoshimi
- Updated FFADO to 2.2.1
- Updated GCC and G++ to 4.7
- Updated libgtk2.0 from 2.20 to 2.24 to hopefully prolong ability to
compile GTK2 apps
- Updated Cinelerra-CV to recent GIT CVA build ...MAJOR IMPROVEMENTS!
(Thanks Paolo Rampino!)
- Updated entire LinuxSampler stack to recent SVN (Thanks rockhopper!)
- Updated LiVES Video Editor to 2.2.6 (Thanks salsaman!)
- Updated DISTRHO Plugins (many new improvements/additions... Thanks falkTX!)
- Upda [message continues]

read more

November 13, 2014 01:00 PM

Libre Music Production - Articles, Tutorials and News

Caps plugins, version 0.9.24 released

The popular Caps LADSPA plugin suite has just seen a new release, version 0.9.24. This release includes "a number of bug fixes and improvements as well as changes intended to make it more suitable for low-power systems."
 
There is also a new/resurrected plugin, cabinetIII, which implements a "simplistic linear emulation of several guitar amplifier loudspeaker cabinets with reduced computational complexity".
 

by Conor at November 13, 2014 08:42 AM

November 12, 2014

Hackaday » digital audio hacks

Protocol Snooping Digital Audio

More and more clubs are going digital. When you go out to hear a band, they’re plugging into an ADC (analog-to-digital converter) box on stage, and the digitized audio data is transmitted to the mixing console over Ethernet. This saves the venue having to run many audio cables over long distances, but it’s a lot harder to hack on. So [Michael] trained popular network analysis tools on his ProCo Momentum gear to see just what the data looks like.

[Michael]’s writeup of the process is a little sparse, but he name-drops all the components you’d need to get the job done. First, he simply looks at the raw data using Wireshark. Once he figured out how the eight channels were split up, he used the command-line version (tshark) and a standard Unix command-line tool (cut) to pull the data apart. Now he’s got a text representation for eight channels of audio data.

Using xxd to convert the data from text to binary, he then played it using sox to see what it sounded like. No dice, yet. After a bit more trial and error, he realized that the data consisted of unsigned, big-endian integers. He tried again, and everything sounded good. Success!

While this is not a complete reverse-engineering tutorial like this one, we think that it hits the high points: using a bunch of the right tools and some good hunches to figure out an obscure protocol.


Filed under: digital audio hacks, Network Hacks

by Elliot Williams at November 12, 2014 09:01 PM

Libre Music Production - Articles, Tutorials and News

Modular set ups, concepts and practices, using Non Session Manager

What is Session Management?

Session management is used to solve the problems involved in managing modular set ups. Ultimately, it allows you to reopen multiple programs and their session files, and to instantly recall all the connections between them.
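The full tutorial goes into the details, but as a minimal sketch (assuming the Non suite is installed under its usual binary names), you launch the session manager, create a named session, and add your NSM-aware applications to it from within the manager:

$ non-session-manager &

Typically it's the jackpatch client, added to the session alongside the others, that takes care of saving and recalling the JACK connections between them.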
 

by Conor at November 12, 2014 09:16 AM

November 11, 2014

Pid Eins

systemd For Administrators, Part XXI

Container Integration

Containers have been one of the hot topics on Linux for a while now. Container managers such as libvirt-lxc, LXC or Docker are widely known and used these days. In this blog story I want to shed some light on systemd's integration points with container managers, to allow seamless management of services across container boundaries.

We'll focus on OS containers here, i.e. the case where an init system runs inside the container, and the container hence in most ways appears like an independent system of its own. Much of what I describe here is available on pretty much any container manager that implements this logic, including libvirt-lxc. However, to make things easy we'll focus on systemd-nspawn, the mini-container manager that is shipped with systemd itself. systemd-nspawn uses the same kernel interfaces as the other container managers, but it is less flexible, as it is designed to be a container manager that is as simple to use as possible and "just works", rather than a generic tool you can configure in every low-level detail. We use systemd-nspawn extensively when developing systemd.

Anyway, so let's get started with our run-through. Let's start by creating a Fedora container tree in a subdirectory:

# yum -y --releasever=20 --nogpg --installroot=/srv/mycontainer --disablerepo='*' --enablerepo=fedora install systemd passwd yum fedora-release vim-minimal

This downloads a minimal Fedora system and installs it in /srv/mycontainer. This command line is Fedora-specific, but most distributions provide similar functionality in one way or another. The examples section in the systemd-nspawn(1) man page contains a list of the various command lines for other distributions.

We now have the new container installed, let's set an initial root password:

# systemd-nspawn -D /srv/mycontainer
Spawning container mycontainer on /srv/mycontainer
Press ^] three times within 1s to kill container.
-bash-4.2# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
-bash-4.2# ^D
Container mycontainer exited successfully.
#

We use systemd-nspawn here to get a shell in the container, and then use passwd to set the root password. With that, the initial setup is done, so let's boot it up and log in as root with our new password:

$ systemd-nspawn -D /srv/mycontainer -b
Spawning container mycontainer on /srv/mycontainer.
Press ^] three times within 1s to kill container.
systemd 208 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
Detected virtualization 'systemd-nspawn'.

Welcome to Fedora 20 (Heisenbug)!

[  OK  ] Reached target Remote File Systems.
[  OK  ] Created slice Root Slice.
[  OK  ] Created slice User and Session Slice.
[  OK  ] Created slice System Slice.
[  OK  ] Created slice system-getty.slice.
[  OK  ] Reached target Slices.
[  OK  ] Listening on Delayed Shutdown Socket.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Listening on Journal Socket.
         Starting Journal Service...
[  OK  ] Started Journal Service.
[  OK  ] Reached target Paths.
         Mounting Debug File System...
         Mounting Configuration File System...
         Mounting FUSE Control File System...
         Starting Create static device nodes in /dev...
         Mounting POSIX Message Queue File System...
         Mounting Huge Pages File System...
[  OK  ] Reached target Encrypted Volumes.
[  OK  ] Reached target Swap.
         Mounting Temporary Directory...
         Starting Load/Save Random Seed...
[  OK  ] Mounted Configuration File System.
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Mounted Temporary Directory.
[  OK  ] Mounted POSIX Message Queue File System.
[  OK  ] Mounted Debug File System.
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Started Load/Save Random Seed.
[  OK  ] Started Create static device nodes in /dev.
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
         Starting Trigger Flushing of Journal to Persistent Storage...
         Starting Recreate Volatile Files and Directories...
[  OK  ] Started Recreate Volatile Files and Directories.
         Starting Update UTMP about System Reboot/Shutdown...
[  OK  ] Started Trigger Flushing of Journal to Persistent Storage.
[  OK  ] Started Update UTMP about System Reboot/Shutdown.
[  OK  ] Reached target System Initialization.
[  OK  ] Reached target Timers.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
         Starting Login Service...
         Starting Permit User Sessions...
         Starting D-Bus System Message Bus...
[  OK  ] Started D-Bus System Message Bus.
         Starting Cleanup of Temporary Directories...
[  OK  ] Started Cleanup of Temporary Directories.
[  OK  ] Started Permit User Sessions.
         Starting Console Getty...
[  OK  ] Started Console Getty.
[  OK  ] Reached target Login Prompts.
[  OK  ] Started Login Service.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.

Fedora release 20 (Heisenbug)
Kernel 3.18.0-0.rc4.git0.1.fc22.x86_64 on an x86_64 (console)

mycontainer login: root
Password:
-bash-4.2#

Now we have everything ready to play around with the container integration of systemd. Let's have a look at the first tool, machinectl. When run without parameters it shows a list of all locally running containers:

$ machinectl
MACHINE                          CONTAINER SERVICE
mycontainer                      container nspawn

1 machines listed.

The "status" subcommand shows details about the container:

$ machinectl status mycontainer
mycontainer:
       Since: Mi 2014-11-12 16:47:19 CET; 51s ago
      Leader: 5374 (systemd)
     Service: nspawn; class container
        Root: /srv/mycontainer
     Address: 192.168.178.38
              10.36.6.162
              fd00::523f:56ff:fe00:4994
              fe80::523f:56ff:fe00:4994
          OS: Fedora 20 (Heisenbug)
        Unit: machine-mycontainer.scope
              ├─5374 /usr/lib/systemd/systemd
              └─system.slice
                ├─dbus.service
                │ └─5414 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-act...
                ├─systemd-journald.service
                │ └─5383 /usr/lib/systemd/systemd-journald
                ├─systemd-logind.service
                │ └─5411 /usr/lib/systemd/systemd-logind
                └─console-getty.service
                  └─5416 /sbin/agetty --noclear -s console 115200 38400 9600

With this we see some interesting information about the container, including its control group tree (with processes), IP addresses and root directory.

The "login" subcommand gets us a new login shell in the container:

# machinectl login mycontainer
Connected to container mycontainer. Press ^] three times within 1s to exit session.

Fedora release 20 (Heisenbug)
Kernel 3.18.0-0.rc4.git0.1.fc22.x86_64 on an x86_64 (pts/0)

mycontainer login:

The "reboot" subcommand reboots the container:

# machinectl reboot mycontainer

The "poweroff" subcommand powers the container off:

# machinectl poweroff mycontainer

So much for the machinectl tool. It knows a couple more commands; please check the man page for details. Note again that even though we use systemd-nspawn as the container manager here, the concepts apply to any container manager that implements the logic described here, including libvirt-lxc for example.

machinectl is not the only tool that is useful in conjunction with containers. Many of systemd's own tools have been updated to explicitly support containers too! Let's try this (after starting the container up again, repeating the systemd-nspawn command from above):

# hostnamectl -M mycontainer set-hostname "wuff"

This uses hostnamectl(1) on the local container and sets its hostname.
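As a quick sanity check (not in the original post, but it uses the same -M switch), you can read the information straight back; the status output will now show the new hostname:

# hostnamectl -M mycontainer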

Similarly, many other tools have been updated for connecting to local containers. Here's systemctl(1)'s -M switch in action:

# systemctl -M mycontainer
UNIT                                 LOAD   ACTIVE SUB       DESCRIPTION
-.mount                              loaded active mounted   /
dev-hugepages.mount                  loaded active mounted   Huge Pages File System
dev-mqueue.mount                     loaded active mounted   POSIX Message Queue File System
proc-sys-kernel-random-boot_id.mount loaded active mounted   /proc/sys/kernel/random/boot_id
[...]
time-sync.target                     loaded active active    System Time Synchronized
timers.target                        loaded active active    Timers
systemd-tmpfiles-clean.timer         loaded active waiting   Daily Cleanup of Temporary Directories

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

49 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

As expected, this shows the list of active units on the specified container, not the host. (Output is shortened here; the blog story is already getting too long.)

Let's use this to restart a service within our container:

# systemctl -M mycontainer restart systemd-resolved.service

systemctl has more container support than just the -M switch, though. With the -r switch it shows the units running on the host, plus all units of all locally running containers:

# systemctl -r
UNIT                                        LOAD   ACTIVE SUB       DESCRIPTION
boot.automount                              loaded active waiting   EFI System Partition Automount
proc-sys-fs-binfmt_misc.automount           loaded active waiting   Arbitrary Executable File Formats File Syst
sys-devices-pci0000:00-0000:00:02.0-drm-card0-card0\x2dLVDS\x2d1-intel_backlight.device loaded active plugged   /sys/devices/pci0000:00/0000:00:02.0/drm/ca
[...]
timers.target                                                                                       loaded active active    Timers
mandb.timer                                                                                         loaded active waiting   Daily man-db cache update
systemd-tmpfiles-clean.timer                                                                        loaded active waiting   Daily Cleanup of Temporary Directories
mycontainer:-.mount                                                                                 loaded active mounted   /
mycontainer:dev-hugepages.mount                                                                     loaded active mounted   Huge Pages File System
mycontainer:dev-mqueue.mount                                                                        loaded active mounted   POSIX Message Queue File System
[...]
mycontainer:time-sync.target                                                                        loaded active active    System Time Synchronized
mycontainer:timers.target                                                                           loaded active active    Timers
mycontainer:systemd-tmpfiles-clean.timer                                                            loaded active waiting   Daily Cleanup of Temporary Directories

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

191 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Here we first see the units of the host, followed by the units of the one container we currently have running. The units of the containers are prefixed with the container name and a colon (":"). (The output is again shortened for brevity's sake.)

The list-machines subcommand of systemctl shows a list of all running containers, querying the system managers within the containers about their state and health. More specifically, it shows whether containers have booted up properly and whether any of their services have failed:

# systemctl list-machines
NAME         STATE   FAILED JOBS
delta (host) running      0    0
mycontainer  running      0    0
miau         degraded     1    0
waldi        running      0    0

4 machines listed.

To make things more interesting we have started two more containers in parallel. One of them has a failed service, which results in its machine state being reported as degraded.

Let's have a look at journalctl(1)'s container support. It too supports -M to show the logs of a specific container:

# journalctl -M mycontainer -n 8
Nov 12 16:51:13 wuff systemd[1]: Starting Graphical Interface.
Nov 12 16:51:13 wuff systemd[1]: Reached target Graphical Interface.
Nov 12 16:51:13 wuff systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 12 16:51:13 wuff systemd[1]: Started Stop Read-Ahead Data Collection 10s After Completed Startup.
Nov 12 16:51:13 wuff systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 12 16:51:13 wuff systemd[1]: Startup finished in 399ms.
Nov 12 16:51:13 wuff sshd[35]: Server listening on 0.0.0.0 port 24.
Nov 12 16:51:13 wuff sshd[35]: Server listening on :: port 24.

However, it also supports -m to show the combined log stream of the host and all local containers:

# journalctl -m -e

(Let's skip the output here completely; I figure you can extrapolate how this looks.)

But it's not only systemd's own tools that understand containers these days; procps sports support for them, too:

# ps -eo pid,machine,args
 PID MACHINE                         COMMAND
   1 -                               /usr/lib/systemd/systemd --switched-root --system --deserialize 20
[...]
2915 -                               emacs contents/projects/containers.md
3403 -                               [kworker/u16:7]
3415 -                               [kworker/u16:9]
4501 -                               /usr/libexec/nm-vpnc-service
4519 -                               /usr/sbin/vpnc --non-inter --no-detach --pid-file /var/run/NetworkManager/nm-vpnc-bfda8671-f025-4812-a66b-362eb12e7f13.pid -
4749 -                               /usr/libexec/dconf-service
4980 -                               /usr/lib/systemd/systemd-resolved
5006 -                               /usr/lib64/firefox/firefox
5168 -                               [kworker/u16:0]
5192 -                               [kworker/u16:4]
5193 -                               [kworker/u16:5]
5497 -                               [kworker/u16:1]
5591 -                               [kworker/u16:8]
5711 -                               sudo -s
5715 -                               /bin/bash
5749 -                               /home/lennart/projects/systemd/systemd-nspawn -D /srv/mycontainer -b
5750 mycontainer                     /usr/lib/systemd/systemd
5799 mycontainer                     /usr/lib/systemd/systemd-journald
5862 mycontainer                     /usr/lib/systemd/systemd-logind
5863 mycontainer                     /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
5868 mycontainer                     /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt102
5871 mycontainer                     /usr/sbin/sshd -D
6527 mycontainer                     /usr/lib/systemd/systemd-resolved
[...]

This shows a process list (shortened). The second column shows the container a process belongs to. All processes shown with "-" belong to the host itself.

But it doesn't stop there. The new "sd-bus" D-Bus client library we have been preparing in the systemd/kdbus context knows containers too. While you use sd_bus_open_system() to connect to your local host's system bus, sd_bus_open_system_container() may be used to connect to the system bus of any local container, so that you can execute bus methods on it.
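As an aside not covered in this post, a container's bus can also be poked at from the shell with busctl, which understands the same -M switch on reasonably recent systemd versions. Assuming your busctl already has that option, listing the bus peers inside the container is as simple as:

# busctl -M mycontainer list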

sd-login.h and machined's bus interface provide a number of APIs to add container support to other programs, too. They support enumerating containers as well as retrieving the machine name from a PID, and more.

systemd-networkd also has support for containers. When run inside a container, it will by default run a DHCP client and IPv4LL on any veth network interface named host0 (this interface is special under the logic described here). When run on the host, networkd will by default provide a DHCP server and IPv4LL on any veth network interface named ve- followed by the container name.
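For example, once a container named mycontainer has been booted with --network-veth (as we will do in a moment), the host side of that link shows up under the naming scheme just described and can be inspected with plain iproute2 tools:

# ip link show ve-mycontainer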

Let's have a look at one last facet of systemd's container integration: the hook-up with the name service switch. Recent systemd versions contain a new NSS module nss-mymachines that makes the names of all local containers resolvable via gethostbyname() and getaddrinfo(). This only applies to containers that run within their own network namespace. With the systemd-nspawn command shown above, however, the container shares the network configuration with the host; hence let's restart the container, this time with a virtual veth network link between host and container:

# machinectl poweroff mycontainer
# systemd-nspawn -D /srv/mycontainer --network-veth -b

Now (assuming that networkd is used both inside the container and on the host), we can already ping the container using its name, thanks to the simple magic of nss-mymachines:

# ping mycontainer
PING mycontainer (10.0.0.2) 56(84) bytes of data.
64 bytes from mycontainer (10.0.0.2): icmp_seq=1 ttl=64 time=0.124 ms
64 bytes from mycontainer (10.0.0.2): icmp_seq=2 ttl=64 time=0.078 ms

Of course, name resolution doesn't only work with ping; it works with all other tools that use libc's gethostbyname() or getaddrinfo(), among them the venerable ssh.
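One caveat worth mentioning (an addition to the original post): nss-mymachines only kicks in if it is listed on the hosts line of /etc/nsswitch.conf. A typical, illustrative line enabling it would look something like this:

hosts: files mymachines dns myhostname

If container names don't resolve for you, that line is the first place to check.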

And this is pretty much all I want to cover for now. We briefly touched on a variety of integration points, and there's a lot more still if you look closely. We are working on even more container integration all the time, so expect more new features in this area with every systemd release.

Note that the whole machine concept is actually not limited to containers, but covers VMs too, to a certain degree. However, the integration is not as close: access to a VM's internals is not as easy as with containers, since it usually requires a network transport instead of allowing direct syscall access.

Anyway, I hope this is useful. For further details, please have a look at the linked man pages and other documentation.

by Lennart Poettering at November 11, 2014 11:00 PM

Nothing Special

FIX IS VERY SWEET!

One of my mission companions would say that a lot. He threatened (jokingly) to fix people. He was Tongan and barely spoke English or Filipino but we had fun and got some good work done.

Regardless of that, my Presonus Firebox had been making a strange whining sound for a while. I started a session a few weeks ago to finally begin a new song but (as usual) didn't get very far before fatherly duty called and the session sat for at least 2 weeks or so. This happens as regularly as I record, which is roughly every few weeks. I eventually passed through the studio to grab something out of our food storage in the next room and realized that the firebox was completely dark. "Strange," I thought, so I went and tried to restart JACK. The firebox wheezed and whined and eventually the blue LED slowly lit up. "Not good," I thought (I usually think very tersely), but I had to get that food upstairs and get dinner on for the kids.



I didn't read too much into it at first since I was pretty busy. But when I came back to it after a few days, I tried it again with the wall power. It powered up more quickly but still not anything like the instant blue light it used to bring up and it was still very audibly whining during operation. I tried plugging the firewire cable into the other ports on the computer and the interface. I tried changing some jack settings and eventually came to the discovery that it was making this audible noise even when nothing was plugged into it except AC power, not even attached to the computer. It sounded like some communication noise I've heard before, like a uart running at 9600 baud or something, but I reaffirmed that it wasn't a good thing. I had noticed it making this noise quietly for the last few months but now it was very noticeable. Playing with my condenser mic a little showed that whatever had changed raised my noise floor significantly.

The firebox works just fine in Linux using the FFADO drivers. It's not feature complete since I can't do the hardware monitoring that I'm pretty sure the firebox is capable of, but it's plenty good for me and my one man band recording methods. It has clean preamps (relative to the internal sound card of my laptop) and is useable at whatever rates I need (usually just 24bit 44.1khz though). And finding a new audio interface that works on linux is no small task. It was especially painful to think of needing to replace it because I'm finally about to have enough money in the budget to buy my first studio monitors, and even just meeting another guy in some middle school parking lot to buy a replacement interface for $60 again would threaten putting off the monitors for some time more.

So with heavy heart I pulled it apart this morning. Actually I had already fixed a broken chair I'd been meaning to get to and a tape measure that I dropped off the roof while re-shingling a few months ago, so I was on a roll. But I pulled it apart without too much trouble and tried to see what was going on. Luckily the damage was fairly noticeable. The insulation on the wire that connects the headphone jack to the upper PCB had melted to the cap on the lower board.

Most of the info I could find about problems with the firebox involved a cap completely blowing, with some charring etc. This seemed much less dramatic and I was concerned that this one slightly marred cap wasn't fully the problem. But it was the best I had to go off of.
The damaged capacitor. Notice the melted plastic; the top also seems slightly bubbled.


I had a Nicholsen PW(M) cap that was 470uF, but it was rated 25V instead of the 10V of the Chang cap I was replacing. The PW series aren't audio grade, but I think the Chang wasn't either, since this was near the power section anyway. I was glad to be upgrading the rated voltage of the cap, but this meant the new one was much larger. I had to get creative with the placement to keep it out of the way of the headphone jack and keep it from touching the chassis or other components.


The new capacitor in place. You can also see the slightly melted insulation on the middle white headphone jack wire.


The soldering was fairly trivial. Wick off the old solder, pull the part, solder in the new one. I put some electrical tape on the headphone jack wires to help prevent them getting further compromised. As luck would have it, I had left one of the leads just a bit too long, and the first time I plugged it in it was shorting to the chassis, so I got no power light at all. In retrospect I was lucky it didn't blow up something, but it was very disappointing. I took it all apart trying to figure out what had gone wrong and ohmed out all the transistors I could find to see if any had shorted.

When that proved fruitless I plugged it in without the chassis and it worked! I then just added one screw at a time and tested if it still turned on. Next screw. Test. Etc. When I got the front part of the chassis attached, that's when I realized I didn't have good clearance on that lead of the new cap. I trimmed it and put the front on. Test. SUCCESS! I continued doing this through the rest of the assembly to be safe, but in the end I had it running perfectly silently, fully assembled, there on the workbench.

I think this is the first time I've taken a complex electronic item apart and been able to fix it without a schematic. It felt so awesome! Fixing the house, the chair, my remote control helicopter motor... mechanical issues are easy to diagnose and fix. Electronics, though, take either detailed documentation and knowledge, or a little luck. So when it works, fix is very sweet.

I took it home and just as a test hooked up my condenser, cranked the gain and added a simple amplifier plugin for more gain. Silence. I am back to my original noise floor! From now on I think I'll shut down my computer between sessions and use the AC power instead of just bus power.

Now I just need to actually make a recording again.

by Spencer (noreply@blogger.com) at November 11, 2014 12:57 PM

Hackaday » digital audio hacks

Teensys and Old Synth Chips, Together At Last

The ancient computers of yesteryear had hardware that’s hard to conceive of today; who would want a synthesizer on a chip when every computer made in the last 15 years has enough horsepower to synthesize sounds in software and output everything with CD quality audio? [Brian Peters] loves these old synth chips and decided to make them all work with a modern microcontroller.

Every major sound chip from the 80s is included in this roundup. The Commodore SID is there, with a chip that includes working filters. The SN76489, the sound chip from the TI99 and BBC Micro, is there, as is the TIA from the Atari consoles. Also featured is the Atari POKEY, found in the 8-bit Atari computers. The POKEY isn't as popular as the SID, but it should be.

[Brian] connected all these chips up with Teensy 2.0 microcontrollers, and with the right software, was able to control these via MIDI. It’s a great way to listen to chiptunes the way they’re meant to be heard. You can check out some sound samples in the videos below.

Thanks [Wybren] for the tip.


Filed under: classic hacks, digital audio hacks

by Brian Benchoff at November 11, 2014 12:00 AM

November 10, 2014

Create Digital Music » open-source

Reviews Weigh in on Our MeeBlip anode Synth; Here’s What They Said


MeeBlip anode, our ready-to-play bass synth with an analog filter, is now shipping and in dealers worldwide. We knew we wanted to make something that was accessible to those new to hardware synths, but had enough personality to surprise advanced users, too – even in a small box, for US$139.95 list.

And we also now know what the critics think.

It’s always easy to explain what you wanted a creation to be. It’s a different, if exciting, experience when you read someone else’s take on what resulted. But that makes me all the more pleased to share a round-up of reviews of the anode, reviews that we’ve found exceptionally thoughtful and thorough, that connect to what we were trying to do.

If you like what you read, anode is on sale now, including at fine dealers worldwide.


Keyboard Magazine gave MeeBlip anode its Key Buy award (our second, following the first-generation MeeBlip), saying: “after a day in the studio it becomes clear that nothing else sounds like it.”

MeeBlip Anode reviewed [Keyboard]


Resident Advisor recommended the anode to first-time synthesists and enthusiasts alike, and gave the synth high marks for having a unique sound:

“The Anode doesn’t sound like a Volca … It’s fatter and nastier, and it also feels like more of a staple. It has a grittier, arguably more analogue character than anything in its price range, and it’s simple yet proficient.”

RA Reviews: MeeBlip anode [Resident Advisor]


MusicRadar (or Future Music in print) reviewed anode, writing “If you’re itching to get your hands on some physical knobs and make a few filthy sounds, the anode is a fantastic buy.”

“The filter’s unique selling point is its phat, filthy sound, and MeeBlip have pulled a blinder in this department”

Reviews: MeeBlip anode [MusicRadar / Future Music]

You can read that review in French, if you prefer (hey, it sounds cooler):
MeeBlip anode: Un synthé open-source mono au son massif et saturé

And there’s an accompanying video review:


Here in Germany, Sound&Recording's Martha Plachetka (a talented synthesist, by the way) had generous words for us in German, by way of your neighborhood Späti / Kiosk / Bahnhof: "Kompakt, günstig, Open Source" ("compact, affordable, open source"):

Testbericht: MeeBlip anode im Test [musikmachen / Sound & Recording]

Ready to get your MeeBlip?

Most importantly, if these reviews have won you over, you can at last get your hands on MeeBlip easily and quickly. We are in stock and can ship quickly in North America, but we're also in dealers near you, in North America, Australia, Europe, and the UK. That means if you're ordering from Vienna or London, you can get rapid shipping from your favorite dealer, or you can walk into stores like Schneidersladen in Berlin, Robotspeak in San Francisco, or Control Voltage in Portland and pick one up. That was our original goal with anode, and seeing it come to fruition this fall is something we're really thankful for, because we hate delays as much as you do.

Order direct and ship right away:
MeeBlip: Get One

Or via select dealers (and if your favorite dealer isn't on this list and you want it there, ask them to get in touch with us or with ALEX4 in Europe):
Dealers Near You

Previously, more creative uses of anode from our friend Diego:

Transform Sounds for Free, with Tools Made with MeeBlip anode by Diego Stocco

And don’t forget, MeeBlip is open source hardware – ready to use right out of the box, but with code and schematics freely available on GitHub. We’re posting new updates there. A review of why that’s important:

Come and Git It: MeeBlip anode Circuits and Code, Open Source on GitHub

The post Reviews Weigh in on Our MeeBlip anode Synth; Here’s What They Said appeared first on Create Digital Music.

by Peter Kirn at November 10, 2014 07:22 PM

GStreamer News

GStreamer Core, Plugins and RTSP server 1.4.4 stable release

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 series adds new features on top of the 1.2 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The 1.4.x bugfix releases contain only important bugfixes relative to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release. The Android binaries are now built with the r10c NDK and are as such binary compatible again with all NDK and Android releases. Additionally, binaries for Android ARMv7 and Android x86 are now provided. This binary release features the first 1.4 releases of GNonLin and the GStreamer Editing Services.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky for a bugfix release.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

November 10, 2014 09:00 AM

Linux Audio Users & Musicians Video Blog

LMMS Promo Video

Check out this new promo video for the LMMS DAW.



by DJ Kotau at November 10, 2014 02:24 AM

Hydrogen Swing Test

A simple and effective demonstration of the swing feature in Hydrogen drum machine by Lorenzo Sutton.


Hydrogen swing test from Lorenzo Sutton on Vimeo.

by DJ Kotau at November 10, 2014 02:22 AM

November 09, 2014

Libre Music Production - Articles, Tutorials and News

Newsletter for November out now - Interviews, tutorials and LMP features in Linux Format

Our newsletter for November has now been sent to our subscribers. If you have not yet subscribed, you can do so from our start page.

You can also read the latest issue online. In it you will find:

  • LMP article features in Linux Format magazine
  • Second installment of 'LMP Asks', with Hermann Meyer of the Guitarix project
  • New software demos
  • More tutorials
  • New software release announcements

and more!

by admin at November 09, 2014 06:02 PM