planet.linuxaudio.org

June 23, 2016

OSM podcast

rncbc.org

Qtractor 0.7.8 - The Snobby Graviton is out!


So it's first solstice'16...

The world sure is a harsh mistress... yeah, you read that right! Heinlein's Moon has just been intentionally rephrased. Yeah, whatever.

Just about when the UK vs. EU affair is under close scrutiny, and sizzling winds of trumpeting (pun intended, again) are coming from the other side of the pond, we all should mark the days we're living in.

No worries: we still have some feeble but comforting news:

Qtractor 0.7.8 (snobby graviton) is out!

Nevertheless ;)

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Change-log:

  • MIDI file track names (and any other SMF META events) are now converted to and from the base ASCII/Latin-1 encoding, so as to prevent invalid SMF files whenever non-Latin-1 UTF-8 encoded MIDI track names are given.
  • MIDI file tempo-map and location markers import/export is now hopefully corrected, after almost a decade of being mistaken, regarding MIDI resolution conversion when different from the current session's setting (TPQN, ticks-per-quarter-note, aka ticks-per-beat, etc.)
  • Introducing LV2 UI Show interface support for types other than Qt, Gtk, X11 and lv2_external_ui.
  • Prevent any visual updates while exporting (freewheeling) audio tracks that have at least one plug-in activate-state automation enabled for playback (as much for not showing messages like "QObject::connect: Cannot queue arguments of type 'QVector'" anymore).
  • The common buses management dialog (View/Buses...) sees the superfluous Refresh button finally removed, while two new button commands take its place: (move) Up and Down.
  • LV2 plug-in Patch support has been added, and LV2 plug-in parameter properties are now accessible for manipulation in the generic plug-in properties dialog.
  • Fixed a recently introduced bug that rendered all but one plug-in instance silent, affecting only DSSI plug-ins which implement DSSI_Descriptor::run_multiple_synths(), e.g. fluidsynth-dssi, hexter, etc.
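The tempo-map fix above is, at heart, a tick-rescaling problem: note positions stored at one resolution have to be rescaled into the session's resolution. A minimal sketch of that arithmetic (not Qtractor's actual code; the function name is made up):

```python
def convert_ticks(tick, tpqn_from, tpqn_to):
    """Rescale a MIDI tick value from one resolution (TPQN) to another,
    rounding to the nearest integer tick."""
    return round(tick * tpqn_to / tpqn_from)

# A quarter note at 960 TPQN lands on tick 960; imported into a
# 96 TPQN session it must land on tick 96.
print(convert_ticks(960, 960, 96))  # -> 96
```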

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (ongoing; help still wanted, always!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


Enjoy && Have (lots of) fun.

by rncbc at June 23, 2016 06:00 PM

Nothing Special

Room Treatment and Open Source Room Evaluation

It's hard to improve something you can't measure.

My studio space is much, much too reverberant. This is not surprising, since it's a basement room with laminate flooring and virtually no soft, absorbent surfaces at all. I planned to add acoustic treatment from the get-go, but funding made me wait until now. I've been recording DI guitars, drum samples, and synth programming, but nothing acoustic until the room gets tamed a little bit.



(note: I get pretty explanatory about why bass traps matter in the next several paragraphs. If you only care about the measurement stuff, skip to below the pictures.)

Well, how do we know what needs taming? First there are some rules of thumb. My room is about 13'x11'x7.5', which isn't an especially large space. This means that sound waves bouncing off the walls will have some strong resonances at 13', 11', and 7.5' wavelengths, which equate to about 86Hz, 100Hz, and 150Hz respectively. There will be many more resonances, but these will be the strongest ones. These become standing waves, where the walls just bounce the acoustic energy back and forth and back and forth and back and forth... Not forever, but longer than the other frequencies in my music.
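For the curious, the rule-of-thumb conversion above (wavelength equal to a room dimension) is a one-liner. A quick sketch, assuming a speed of sound of roughly 1125 ft/s:

```python
# Rough room-mode arithmetic: the frequency whose wavelength matches a
# room dimension, using c ~= 1125 ft/s for the speed of sound in air.
SPEED_OF_SOUND_FT_S = 1125.0

def wavelength_to_freq(length_ft):
    """Frequency (Hz) of a sound wave whose wavelength is length_ft feet."""
    return SPEED_OF_SOUND_FT_S / length_ft

for dim in (13.0, 11.0, 7.5):  # the room dimensions from the text
    print(f"{dim:4.1f} ft -> {wavelength_to_freq(dim):5.1f} Hz")
```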

For my room, these are very much in the audible spectrum so this acoustic energy hanging around in the room will be covering other stuff I want to hear (for a few hundred extra ms) while mixing. In addition to these primary modes there will also be resonances at 2x, 3x, 4x, etc. of these frequencies. Typically the low end is where it tends to get harder to hear what's going on, but all the reflections add up to the total reverberance which is currently a bit too much for my recording.

Remember, acoustic waves are switching (or waving, even) between high pressure/low speed and low pressure/high speed. Where the high points lie depends on the wavelength (and the location of the sound source). At the boundaries of the room, the air carrying the primary modes' waves (theoretically) doesn't move at all. That means the pressure is highest there. At the very middle of the room you have a point where the air carrying these waves is moving the fastest. Of course the air is usually carrying lots of waves at the same time, so how it's moving/pressurized in the room is hard to predict exactly.

With large wavelengths like the ones we're most worried about, you aren't going to stop them with a 1" thick piece of foam hung on the wall (no matter how expensive it was). You need a longer space to act on the wave and trap more energy. With small rooms more or less the only option is through porous absorbers which basically take acoustic energy out of the room when air carrying the waves tries to move through the material of the treatment. Right against the wall air is not moving at all, so putting material there isn't going to be very effective for the standing waves. And only 1" of material isn't going to act on very much air. So you need volume of material and you need to put it in the right place.

Basically, thicker is better to stop these low waves. If you have sufficient space in your room, put in a floor-to-ceiling 6' deep bass trap. But most of us don't have that kind of space to give up. The thicker the panel, the less dense the material you should use. Thick traps will also stop higher frequencies, so basically, just focus on the low stuff and the highs will be fine. Often, if the trap is not at a direct reflecting point from the speaker, it's advised to glue kraft paper to the material, which bounces some of the ambient high end around the room so it's not too dead. How dead is too dead? How much high end does each one bounce? I don't know. It's just a rule of thumb. The rule for depth is quarter wavelength: an 11' wave really will be stopped well by a 2.75' thick trap. This thickness guarantees that there will be some air moving somewhere through the trap even if you put it right in the null. Do you have a couple extra feet of space to give up all around the room? Me neither. But we'll come back to that. Also note that surface area is more important than thickness. Once you've covered enough wall/floor/ceiling, then the next priority is thickness.

The next principle is placement. You can place treatment wherever you want in the room, but some places are better than others. Right against the wall is OK because air is moving right up until the wall, but it will be better with a little gap, because the air is moving faster a little further from the wall. So we come back to the quarter wavelength rule. The most effective placement of a panel is spaced equal to its thickness, so a 3" panel is best 3" away from the wall. This effectively doubles the thickness of your panel. Thus we see placement and thickness are related. Now your 3" panel is acting like it's 6", damping pretty effectively down to 24" waves (~563Hz). It also works well on all shorter waves. Bass traps are really broadband absorbers. But... 563Hz is a depressingly high frequency when we're worried about 80Hz. This trap will do SOMETHING to even 40Hz waves, but not a whole lot. What do we do if our 13' room mode is causing a really strong resonance?
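The quarter-wavelength arithmetic above is easy to sketch (same rough 1125 ft/s speed of sound as before; the function name is mine):

```python
# Quarter-wavelength rule of thumb: a porous panel of thickness t with an
# air gap g behind it acts roughly like a panel (t + g) deep, absorbing
# well down to the frequency whose quarter wavelength equals that depth.
SPEED_OF_SOUND_IN_S = 1125.0 * 12.0  # speed of sound in inches/second

def lowest_effective_freq(panel_in, gap_in):
    """Lowest frequency (Hz) the panel+gap combination treats well."""
    effective_depth = panel_in + gap_in      # inches
    wavelength = 4.0 * effective_depth       # quarter-wavelength rule
    return SPEED_OF_SOUND_IN_S / wavelength

print(lowest_effective_freq(3, 3))  # 3" panel, 3" gap -> 562.5 Hz
print(lowest_effective_freq(3, 6))  # same panel, 6" gap -> 375.0 Hz
```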

You can move your trap further into the room. This creates a gap in the absorption curve, but it makes the absorption reach lower. So move the 3" panel out to a 6" gap and it won't be as effective at absorbing 563Hz, but now it works much better on 375Hz. You are creating a tuned trap. It still works some on 563Hz, but the absorption curve will have a low point there and a bump at 375. Angling the trap so the gap varies can help smooth this response, making it absorb more frequencies, but less effectively at specific ones. So trade off a smooth curve for really absorbing a lot of energy at a specific frequency if you need to.

The numbers here are pretty theoretical. Even though the trap is tuned to a certain frequency, a lot of other frequencies will get absorbed. Some waves will enter at angles, which makes the trap seem thicker. Some waves will bounce off. Some waves will diffract (bend) around the trap somewhat. There are so many variables that it's very difficult to predict acoustics precisely. But these rules of thumb are applicable in most cases.

The final thing to discuss is the material. It's best to find one that has been tested, with published numbers, so you have a good idea if and how it will work. Mineral wool is a fibrous material that resists air passing through it. Fiberglass insulation can work too. Rigid fiberglass Owens Corning 703 is the standard choice, but mineral wool is cheaper and just as effective, so it's becoming more popular. Both materials (and there are others) come in various densities, and here the idea that thicker means less dense comes into play. This is because if the material is too dense, acoustic waves can bounce back out on their way through rather than be absorbed.

Man. I didn't set out to give a lecture on acoustics, but it's there and I'm not deleting it. I do put the bla in blog, remember? There's a lot more (and better) reading you can do at an acoustic expert's site.

For me and my room (and my budget), I started out building two 9" deep, 23" wide, floor-to-ceiling traps for the two corners I have access to (the other two corners are blocked by the door and my wife's sewing table). These will be stuffed with Roxul Safe and Sound (SnS), which is a lower density mineral wool. It's available from Lowes online, but it was cheaper to find a local supplier to special order it for me.


Roxul compresses it in the packaging nicely

I will build a 6"x23" panel using whatever's left and will place it behind the listening position. I also ordered a bag of the denser Roxul Rockboard 60 (RB60). I'm still waiting for it to come in (rare stuff to find in little Logan, UT, but I found a supplier kind enough to order it and let me piggyback on their shipping container so I'm not paying any shipping; thanks, Building Specialties!). I will also build four 4"x24"x48" panels out of the Rockboard 60 (when it finally arrives), which is a density that more or less matches the performance of OC703. These will be hung on the walls at the first reflection points and ceiling corners. Next year or so, when I have some more money, I plan to buy a second bag of the Rockboard, which will hopefully be enough treatment to feel pretty well done. I considered using the 2" RB60 panels individually so I can cover more surface (which is the better thing acoustically), but in the end I want 4" panels and I don't know if it will be feasible to rebuild these later to add thickness.
my stack of flashing

I more or less followed Steven Helm's method with some variations. The stuff he used isn't very available, so I bought some 20 gauge 1.5" galvanized L-framing (angle flashing) from the same local supply shop that special-ordered the mineral wool for me. They had 25ga., but I was worried it would be too flimsy, considering that even on the rack a lot of it got bent. I just keep envisioning my kids leaning against them or something and putting a big dent in the side. After buying it I worried it would be too heavy, but now, after the build, I think the thicker material was a good choice for my towering 7.5' bass traps. For the smaller 2'x4' panels that are going to be hung up, I'm not sure yet.

I chose not to do a wood trap because I thought riveting would be much faster than nailing, since I don't have a compressor yet. Unfortunately I didn't foresee how long it can take to drill through 20ga. steel. I found after the first trap that it's much faster to punch a hole with a nail, then drill it out to the rivet size. It's nice when you have something to push against (a board underneath), but since I was limited on workspace I sometimes had to drill sideways. A set of vise-grip pliers really made that much easier.


Steven's advice about keeping it square is very good; it's something I didn't do the best at on the first trap, but I wasn't too far off either. The key is using the square to keep your snips cutting squarely. Also, since my frame material is so thick, it doesn't bend very tightly, so I found it useful to take some pliers and twist the corner a bit to square it up.
Corner is a bit round

a bit tighter corner now
Since my traps are taller than a single SnS panel, I had to stack them and cut 6" off the top. A serrated knife works best for cutting this stuff, but I didn't have an old one around, so I improvised one from some scrap sheet metal.

I staggered the seams to try to make a more homogeneous material.


With all the interiors assembled, I think the frames actually look good enough that you could keep them on the outside, but my wife preferred the whole thing be wrapped in fabric. I don't care either way.


Before covering them, though, I glued on some kraft paper using spray adhesive. I worked from top to bottom, but some of them got a bit wrinkled.




The paper was a bit wider than the frame, so I cut around the frame and stuffed the excess behind a bit, so it has a tidier look.





I'd say they look pretty darn good even without fabric!




Anyway, all that acoustic blabber above boils down to this: even when following rules of thumb, the best thing to do is measure the room before and after treatment, to see what needs to be treated and how well your treatment did. If it's good, leave it; if it's bad, you can add more or try to move it around to address where it's performing poorly.

So, since measuring is important, and I'm kind of a stickler for open source software, I will show you today how to do it. The de facto standard for measurement is the Room EQ Wizard (REW) freeware program. It's free but not libre, so I decided to use what was libre. Full disclosure: I installed REW and tried it, but couldn't ever get sound to come out of it, so that helped motivate the switch. I was impressed REW had a Linux installer, but I couldn't find any answers on getting sound out. It's Java-based and not JACK-capable, so it couldn't talk to my FireWire soundcard. REW is very good, but for the freedom idealists out there, we can use Aliki.

The method is the same in both: generate a sweep of sine tones with your speakers, record the room's response with your mic, and do some processing that creates an impulse response for your room. An impulse is a broadband signal that contains all frequencies equally for a very, very (infinitely) short amount of time. True impulses are difficult to generate, so it's easier to just send the frequencies one at a time, then combine them with some math. I've talked a little about measuring impulse responses before. The program I used back then (qloud) isn't compiling easily for me these days, because it hasn't been updated for modern Qt libraries, and Aliki is more tuned for room measurement vs. loudspeaker measurement.

I am most interested in two impulse responses: 1. the room response between my monitors and my ears while mixing, and 2. the room response between my instruments and the mic. Unfortunately I can't take my monitors or my mic out of the measurement, because I don't have anything else to generate or record the sine sweeps with. So each measurement will have these parts of my signal chain's frequency response convolved in too, but I think they are flat enough to get an idea, and they'll be consistent for before and after treatment comparisons. I don't have a planned position for where I will be recording in this room, but the listening position won't be moving, so I'm focused on response 1.

The Aliki manual linked above is pretty good; for the most part I'm not going to rehash it here. You first select a project location, and I found that anywhere but my home directory didn't work. Aliki makes 4 folders in that location to store different audio files: sweep, capture, impulse, and edited files.

We must first make a sweep, so click the sweep button. I'm going from 20Hz to 22000Hz. May as well see the full range, no? A longer sweep can actually reduce the noise of the measurement, so I went a full 15 seconds. This generates an audio file with the sweep in it, in the sweep folder. Aliki stores everything as .ald files, basically a wav with a simpler header I think.
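Aliki generates the sweep for you, but if you're curious what's inside that file, a logarithmic sine sweep is easy to sketch with numpy (parameters match the ones I used above; this is an illustration, not Aliki's code):

```python
import numpy as np

def log_sweep(f0=20.0, f1=22000.0, duration=15.0, rate=48000):
    """Exponential (log) sine sweep from f0 to f1 over `duration` seconds."""
    t = np.arange(int(duration * rate)) / rate
    k = np.log(f1 / f0)
    # Phase of an exponential chirp: instantaneous frequency rises from
    # f0 to f1 along a logarithmic curve.
    phase = 2.0 * np.pi * f0 * duration / k * (np.exp(t / duration * k) - 1.0)
    return np.sin(phase)

sweep = log_sweep()
print(len(sweep))  # 720000 samples: 15 s at 48 kHz
```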

Next step: capture. Set up your audio input and output ports, and pick your sweep file for it to play. Use the test function to get your levels. I found that even with my preamps cranked, the levels coming in from my mic were low. It was night, so I didn't want to play the sweep much louder. You can edit the captures if you need to. Each capture makes a new file or files in the capture directory.

I did this over several days, because I measured before treatment, then with the traps in place before the paper was added, and again after the paper was glued on. Use the load function to get your files and it will show them in the main window. Since my levels were low, I went ahead and misused the edit functions to add gain to the capture files so they were somewhat near full swing.

The next step is the convolution that removes the sweep and calculates the impulse response. Select the sweep file you used, set the end time to be longer than your sweep was, click apply, and it should give you the impulse response. Be aware that if your levels are low like mine were, you'll only get the tiniest blip of waveform near zero. Save that as a new file and then go to edit.
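That convolution step amounts to deconvolving the captured signal by the sweep. A naive spectral-division sketch of the idea (real tools like Aliki regularize and window this much more carefully):

```python
import numpy as np

def log_sweep(f0, f1, duration, rate):
    """Exponential sine sweep, as used for the excitation signal."""
    t = np.arange(int(duration * rate)) / rate
    k = np.log(f1 / f0)
    return np.sin(2.0 * np.pi * f0 * duration / k * (np.exp(t / duration * k) - 1.0))

def deconvolve(capture, sweep):
    """Estimate an impulse response by dividing the capture's spectrum by
    the sweep's spectrum (naive; a tiny epsilon guards empty bins)."""
    n = len(capture) + len(sweep)
    C = np.fft.rfft(capture, n)
    S = np.fft.rfft(sweep, n)
    return np.fft.irfft(C * np.conj(S) / (np.abs(S) ** 2 + 1e-12), n)

# Sanity check: a "room" that is just a 100-sample delay should produce
# an impulse response peaking at sample 100.
sweep = log_sweep(20.0, 22000.0, 1.0, 48000)
capture = np.concatenate([np.zeros(100), sweep])
ir = deconvolve(capture, sweep)
print(np.argmax(np.abs(ir)))  # -> 100
```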

In edit, you'll likely need to adjust the gain, but you can also adjust the length, and in the end you have a lovely impulse response that you can export to a .wav file. You can listen to it (though it's not much to listen to) or, more practically, use it in your favorite convolution plugin like IR or KlangFalter.

But we don't want to use this impulse response for convolving signals with; we can already get that reverb by just playing an instrument in the room! We want to analyze the impulse response to see if there's improvement, or if something still needs to be changed. So this is where I imported the IR wav files into GNU Octave.

I wrote a few scripts to help out, namely: plotIREQ and plotIRwaterfall. They can be found in their git repository. I also made fftdecimate which smooths it out from the raw plotIREQ plot:



to this:

I won't go through the code in too much detail. If you'd like me to, leave a comment and I'll do another post. But look at plotMyIRs.m for usage examples of how I generated these plots.
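For readers who don't run Octave, here's a rough Python equivalent of what an EQ plot script does, plus a crude fractional-octave smoother in the spirit of fftdecimate (these are my own illustrative functions, not the scripts from the repository):

```python
import numpy as np

def ir_magnitude_db(ir, rate=48000):
    """Frequency axis (Hz) and magnitude response (dB) of an impulse response."""
    mag = np.abs(np.fft.rfft(ir))
    freqs = np.fft.rfftfreq(len(ir), 1.0 / rate)
    return freqs, 20.0 * np.log10(mag + 1e-12)

def smooth_octave(freqs, mag_db, fraction=6):
    """Very crude 1/fraction-octave smoothing: replace each bin with the
    mean of all bins within half that octave span on either side."""
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        if f <= 0.0:
            out[i] = mag_db[i]
            continue
        lo = f * 2.0 ** (-0.5 / fraction)
        hi = f * 2.0 ** (0.5 / fraction)
        sel = (freqs >= lo) & (freqs <= hi)
        out[i] = mag_db[sel].mean()
    return out

# A perfect impulse has a flat (0 dB) response, smoothed or not.
ir = np.zeros(1024)
ir[0] = 1.0
freqs, mag = ir_magnitude_db(ir)
smoothed = smooth_octave(freqs, mag)
```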


You can see the big bump from around 150Hz to 2kHz, and a couple of big valleys at 75Hz, 90Hz, 110Hz, etc. One thing I decided from looking at these is that the subwoofer should be turned up a bit, since my Blue Sky Exo2 crosses over at around 150Hz, and everything below that measured rather low.

I was hoping for a smoother result, especially in the low end, but I plan to build more broadband absorbers for the first reflection points. While a 4" thick panel doesn't target the really low end like these bass traps do, it does have some effect, even on the very low frequencies. So I hope they'll have a cumulative effect down on that lower part of the graph.


The other point I'd like to comment on is that the paper didn't seem to make much of a difference. It's possible that since it wasn't factory-glued onto the rockwool, it lacks a sufficient bond to transfer the energy properly. It doesn't seem to hurt the results much either; in fact, around 90Hz it seems like it actually makes the response smoother, so I don't plan to remove it (yet, at least).

The last plots I want to look at are the waterfall plots. These show how the frequencies are responding in time, so you can see if any frequencies are ringing/resonating and need better treatment.


Here we see some anomalies. Just comparing the first and final plots, it's easy to see that nearly every frequency decays much more quickly (we're focused on the lower region, 400Hz and below, since that's where the room's primary modes lie). You also see a long resonance somewhere around 110Hz that still isn't addressed, which is probably the next target. I can try to move the current traps out from the wall and see if that helps, or make a new panel and try to tune it.
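A waterfall of this kind can be approximated by taking spectra of the impulse response starting at successively later offsets; here's a rough Python sketch (my own illustration, not the plotIRwaterfall script):

```python
import numpy as np

def waterfall(ir, slices=30, window=4096):
    """Cumulative-decay style data: windowed FFT magnitude (dB) of the
    impulse response at successively later sample offsets.  Returns a
    2-D array with one row per time slice."""
    step = max(1, (len(ir) - window) // slices)
    win = np.hanning(window)
    rows = []
    for s in range(slices):
        seg = ir[s * step : s * step + window]
        if len(seg) < window:
            break
        rows.append(20.0 * np.log10(np.abs(np.fft.rfft(seg * win)) + 1e-12))
    return np.array(rows)

# A decaying 110 Hz ring: later slices should show a lower peak, the way
# a treated room's resonances should die away in later waterfall rows.
rate = 48000
t = np.arange(rate) / rate
ir = np.exp(-10.0 * t) * np.sin(2.0 * np.pi * 110.0 * t)
rows = waterfall(ir)
```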

Really though, I'm probably going to wait until I've built the next set of panels.
I hope this was informative and useful. Try out those Octave scripts. And please comment!

by Spencer (noreply@blogger.com) at June 23, 2016 03:10 PM

June 20, 2016

open-source – cdm createdigitalmusic

A composition you can only hear by moving your head

“It’s almost like there’s an echo of the original music in the space.”

After years of music being centered on stereo space and fixed timelines, sound seems ripe for reimagination as open and relative. Tim Murray-Browne sends us a fascinating idea for how to do that, in a composition in sound that transforms as you change your point of view.

Anamorphic Composition (No. 1) is a work that uses head and eye tracking so that you explore the piece by shifting your gaze and craning your neck. That makes for a different sort of composition – one in which time is erased, and fragments of sound are placed in space.

Here’s a simple intro video:

Anamorphic Composition (No. 1) from Tim Murray-Browne on Vimeo.

I was also unfamiliar with the word “anamorphosis”:

Anamorphosis is a form which appears distorted or jumbled until viewed from a precise angle. Sometimes in the chaos of information arriving at our senses, there can be a similar moment of clarity, a brief glimpse suggestive of a perspective where the pieces align.

Tech details:

The head tracking and most of the 3D is done in Cinder using the Kinect One. This pipes OSC into SuperCollider which does the sounds synthesis. It’s pretty much entirely additive synthesis based around the harmonics of a bell.

I’d love to see experiments with this via acoustically spatialized sound, too (not just virtual tracking). Indeed, this question came up in a discussion we hosted in Berlin in April, as one audience member talked about how his perception of a composition changed as he tilted his head. I had a similar experience taking in the work of Tristan Perich at Sónar Festival this weekend (more on that later).

On the other hand, virtual spaces will present still other possibilities – as well as approaches that would bend the “real.” With the rise of VR experiences in technology, the question of point of view in sound will become as important as point of view in image. So this is the right time to ask this question, surely.

Something is lost on the Internet, so if you’re in London, check out the exhibition in person. It opens on the 27th:

http://timmb.com/anamorphic-composition-no-1/

The post A composition you can only hear by moving your head appeared first on cdm createdigitalmusic.

by Peter Kirn at June 20, 2016 04:30 PM

Libre Music Production - Articles, Tutorials and News

LMP Asks #19: An interview with Vladimir Sadovnikov

This month LMP Asks talks to Vladimir Sadovnikov, programmer and sound engineer, about his project, LSP Plugins, which aims to bring new, previously unavailable plugins to Linux. As well as the LSP plugin suite, Vladimir has also contributed to other Linux audio projects such as Calf Studio Gear and Hydrogen.

by Conor at June 20, 2016 12:49 PM

June 18, 2016

Libre Music Production - Articles, Tutorials and News

Check out 'Why, Phil?', new Linux audio webshow series

Philip Yassin has recently started an upbeat Linux audio webshow series called 'Why, Phil?'. The series has already notched up an impressive 7 episodes, most of which revolve around Phil's favourite DAW, Qtractor.

by Conor at June 18, 2016 06:45 PM

The "Gang of 3" is loose again

The Vee One Suite, aka the gang of three old-school homebrew software instruments (respectively synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), are here released once again, now in their tenth reincarnation.

by yassinphilip at June 18, 2016 03:25 PM

June 16, 2016

rncbc.org

Vee One Suite 0.7.5 - The Tenth beta is out!


Hiya!

The Vee One Suite, aka the gang of three old-school homebrew software instruments (respectively synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), are here released once again, now in their tenth reincarnation.

All available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • an LV2 instrument plug-in.

The esoteric change-log goes like this:

  • LV2 Patch property parameters and Worker/Schedule support are now finally in place, allowing for sample file path selections from generic user interfaces (applies to samplv1 and drumkv1 only).
  • All changes to most continuous parameter values are now smoothed to a fast but finite slew rate.
  • All BPM sync options to current transport (Auto) have been refactored to a new special minimum value (which is now zero).
  • In compliance with the LV2 spec., MIDI Controllers now affect cached parameter values only, via shadow ports, instead of input control ports directly, mitigating their read-only restriction.
  • Make sure LV2 plug-in state is properly reset on restore.
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And then again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.5 (tenth official beta) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

samplv1 - an old-school polyphonic sampler

samplv1 0.7.5 (tenth official beta) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.5 (tenth official beta) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && have lots of fun ;)

by rncbc at June 16, 2016 05:30 PM

June 15, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #18: Andrew Lambert & Neil Cosgrove

LMP Asks #18: Andrew Lambert & Neil Cosgrove

This month we interviewed Andrew Lambert and Neil Cosgrove, members of Lorenz Attraction and developers of LNX_Studio, a cross-platform, customizable, networked DAW written in the SuperCollider programming language. Please see the end of the article for links to LNX_Studio and Lorenz Attraction's music!

by Scott Petersen at June 15, 2016 05:09 PM

June 13, 2016

digital audio hacks – Hackaday

Ball Run Gets Custom Sound Effects

Building a marble run has long been on my project list, but now I’m going to have to revise that plan. In addition to building an interesting track for the orbs to traverse, [Jack Atherton] added custom sound effects triggered by the marble.

I ran into [Jack] at Stanford University’s Center for Computer Research in Music and Acoustics booth at Maker Faire. That’s a mouthful, so they usually go with the acronym CCRMA. In addition to his project there were numerous others on display and all have a brief write-up for your enjoyment.

[Jack] calls his project Leap the Dips which is the same name as the roller coaster the track was modeled after. This is the first I’ve heard of laying out a rolling ball sculpture track by following an amusement park ride, but it makes a lot of sense since the engineering for keeping the ball rolling has already been done. After bending the heavy gauge wire [Jack] secured it in place with lead-free solder and a blowtorch.

As mentioned, the project didn’t stop there. He added four piezo elements which are monitored by an Arduino board. Each is at a particularly extreme dip in the track which makes it easy to detect the marble rolling past. The USB connection to the computer allows the Arduino to trigger a MaxMSP patch to play back the sound effects.

For the demonstration, Faire-goers wear headphones while letting the balls roll, but in the video below [Jack] let me plug in directly to the headphone port on his MacBook. It's a bit weird, since there's no background sound of the Faire during this part, but it was the only way I could get a reasonable recording of the audio. I love the effect, and think it would be really fun to package this as a standalone using the Teensy Audio library and audio adapter hardware.


Filed under: cons, digital audio hacks

by Mike Szczys at June 13, 2016 06:31 PM

Synchronize Data With Audio From A $2 MP3 Player

Many of the hacks featured here are complex feats of ingenuity that you might expect to have emerged from a space-age laboratory rather than a hacker’s bench. Impressive stuff, but on the other side of the coin the essence of a good hack is often just a simple and elegant way of solving a technical problem using clever lateral thinking.

Take this project from [drtune]: he needed to synchronize some lighting to an audio stream from an MP3 player, and wanted to store his lighting control data on the same SD card as his MP3 file. Sadly his serial-controlled MP3 player module would only play audio from the card, and he couldn't read a data file from it, so there seemed to be no easy way forward.

His solution was simple: realizing that the module has a stereo DAC but a mono amplifier, he encoded the data as an audio FSK stream, similar to that used by modems back in the day, and applied it to one channel of his stereo MP3 file. He could then play the music from the first channel and digitize the FSK data on the other, before applying it to a software modem to retrieve its information.
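The encoding side of that trick is straightforward to sketch: generate an FSK tone stream for the data bits and put it on one channel of a stereo file. The frequencies and baud rate below are placeholders, not [drtune]'s actual parameters:

```python
import numpy as np

def fsk_encode(bits, f_mark=2400.0, f_space=1200.0, baud=300, rate=44100):
    """Encode bits as continuous-phase FSK: one tone burst per bit, with
    the phase carried across bit boundaries to avoid clicks."""
    spb = int(rate / baud)  # samples per bit
    phase = 0.0
    chunks = []
    for b in bits:
        f = f_mark if b else f_space
        inc = 2.0 * np.pi * f / rate
        phases = phase + inc * np.arange(spb)
        chunks.append(np.sin(phases))
        phase = phases[-1] + inc
    return np.concatenate(chunks)

def stereo_with_data(music_mono, data_signal):
    """Music on the left channel, FSK data on the right."""
    n = max(len(music_mono), len(data_signal))
    left = np.zeros(n)
    right = np.zeros(n)
    left[: len(music_mono)] = music_mono
    right[: len(data_signal)] = data_signal
    return np.stack([left, right], axis=1)

sig = fsk_encode([1, 0, 1, 1])
frames = stereo_with_data(np.zeros(100), sig)
print(frames.shape)  # (588, 2): 4 bits at 147 samples each, 2 channels
```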

There was a small snag though: the MP3 player summed both channels before supplying audio to its amplifier. Not a huge problem to overcome; a bit of detective work in the device datasheet allowed him to identify the resistor network doing the mixing, and he removed the component for the data channel.

He’s posted full details of the system in the video below the break, complete with waveforms and gratuitous playback of audio FSK data.

This isn’t the first time we’ve featured audio FSK data here at Hackaday. We’ve covered its use to retrieve ROMs from 8-bit computers, seen it appearing as part of TV news helicopter coverage, and even seen an NSA Cray supercomputer used to decode it when used as a Star Trek sound effect.


Filed under: digital audio hacks

by Jenny List at June 13, 2016 03:31 PM

Hackaday Prize Entry: 8-Bit Arduino Audio for Squares

A stock Arduino isn’t really known for its hi-fi audio generating abilities. For “serious” audio like sample playback, people usually add a shield with hardware to do the heavy lifting. Short of that, many projects limit themselves to constant-volume square waves, which are musically uninspiring but easy.

[Connor]’s volume-control scheme for the Arduino bridges the gap. He starts off with the tone library that makes those boring square waves, and adds dynamic volume control. The difference is easy to hear: in nature almost no sounds start and end instantaneously. Hit a gong and it rings, all the while getting quieter. That’s what [Connor]’s code lets you do with your Arduino and very little extra work on your part.

The code that accompanies the demo video (which is embedded below) is a good place to start playing around. The Gameboy/Mario sound, for instance, is as simple as playing two tones, and making the second one fade out. Nonetheless, it sounds great.

Behind the scenes, it uses Timer 0 at maximum speed to create the “analog” values (via PWM and the analogWrite() command) and Timer 1 to create the audio-rate square waves. That’s it, really, but that’s enough. A lot of beloved classic arcade games didn’t do much more.
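
The same trick can be modeled in a few lines of Python (a simulation of the idea only, not [Connor]’s actual Arduino code): an audio-rate square wave, as Timer 1 produces, scaled by a volume that decays each sample, as Timer 0’s PWM output provides.

```python
def enveloped_square(freq, duration, decay=0.9995, rate=44100):
    """Square wave at `freq` Hz with an exponentially decaying volume."""
    samples = []
    amp = 1.0
    period = rate / freq
    for n in range(int(duration * rate)):
        # square wave: +1 for the first half of each period, -1 for the second
        value = 1.0 if (n % period) < period / 2 else -1.0
        samples.append(value * amp)
        amp *= decay  # exponential fade, like a struck gong ringing down
    return samples

gong = enveloped_square(440, 0.5)
```

Sweeping `decay` toward 1.0 gives a longer ring; snapping it low gives percussive blips.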

While you can do significantly fancier things (like sample playback) with the same hardware, the volume-envelope-square-wave approach is easy to write code for. And if all you want is some simple, robotic-sounding sound effects for your robot, we really like this approach.

The HackadayPrize2016 is Sponsored by:

Filed under: Arduino Hacks, digital audio hacks, The Hackaday Prize

by Elliot Williams at June 13, 2016 05:01 AM

June 10, 2016

open-source – cdm createdigitalmusic

Music thing’s Turing Machine gets a free Blocks version

We already saw some new reasons this week to check out Reaktor 6 and Blocks, the software modular environment. Here’s just one Blocks module that might get you hooked – and it’s free.

“Music Thinking Machines,” out of Berlin, have built a software rendition of Music Thing’s awesome Turing Machine Eurorack module (created by Tom Whitwell). As that hardware is open source, and because what you can do in wiring you can also do in software, it was possible to build software creations from the Eurorack schematics.

The beauty of this is, you get the Turing Machine module in a form that lets you instantly control other Reaktor creations – as well as the ability to instantiate as many modules as you want without the aid of a screwdriver or waiting for a DHL delivery to arrive. (Hey, software has some advantages.) I don’t so much see it reducing the appeal of the hardware, either, as it makes me covet the hardware version every time I open up the Reaktor ensemble.

And the module is terrific. In addition to the Turing Machine Mk 2, you get the two Mk 2 expanders, Volts and Pulses.

The Turing Machine Mk 2 is a random looping sequencer – an idea generator that uses shift registers to make melodies and rhythms you can use with other modules. It’s also a fun build. But now, you can use that with the convenience of Reaktor.

Pulses and Voltages expanders add still more unpredictability. Pulses is a random looping clock divider, and Voltages is a random looping step sequencer. I also like the unique front panels made just for the Reaktor version … I wonder if someone will translate that into actual hardware.

The idea is to connect them together: take the 8 P outputs from the Turing Machine and connect them to the 8 P inputs on Pulses (for pulses), and then do the same with the voltage inputs and outputs on Volts. You can also make use, as the example ensemble does, of a Clock and Clock Divider module included by default in Reaktor 6’s Blocks collection.

With controls for probability and sequence length, you can put it all together and have great fun with rhythms and tunes.
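
The core mechanism – as I read the open-source hardware design, so treat this as an illustrative sketch rather than the actual firmware – is a looping shift register whose recirculated bit is flipped with a settable probability:

```python
import random

class TuringSequencer:
    """Looping shift register; Prob knob sets the chance of a bit flip."""
    def __init__(self, length=8, prob=0.0, seed=None):
        self.rng = random.Random(seed)
        self.bits = [self.rng.randint(0, 1) for _ in range(length)]
        self.prob = prob  # 0.0 = locked loop, 1.0 = always flip

    def step(self):
        bit = self.bits.pop(0)            # oldest bit leaves the register...
        if self.rng.random() < self.prob:
            bit ^= 1                      # ...and may be flipped on re-entry
        self.bits.append(bit)
        # read the register as a number, e.g. to derive a pitch voltage
        return int("".join(map(str, self.bits)), 2)

tm = TuringSequencer(length=8, prob=0.0, seed=1)
seq = [tm.step() for _ in range(16)]
# with prob=0 the pattern locks: it repeats every 8 steps
```

Nudging `prob` above zero lets the melody slowly mutate; cranking it randomizes the loop entirely.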

Download the Reaktor ensemble:

Turing Machine Mk2 plus Pulses and Volts Expanders [Reaktor User Library]

Here’s what the original modules look like in action:

Find out more:

https://github.com/TomWhitwell/TuringMachine/

Also worth a read (especially now with this latest example of what open source hardware can mean – call it free advertising in software form, not to mention a cool project):
Why open source hardware works for Music Thing Modular

Oh, and if you want to go the opposite direction, Tom also recently wrote a tutorial on writing firmware for the Mutable Clouds module. The old software/hardware line is more blurred than ever, as we make software versions of hardware that then interface with hardware and back to hardware again, and hardware also runs software. (Whew.)

Turing Machine Controls
Prob: Determines the probability of a bit being swapped from 0 to 1 (or vice versa).
Fully right locks the sequence of bits; fully left locks the sequence in a “Möbius loop” mode.
Length: Sets the length of the sequence
Scale: Scales the range of the pitch output
+/-: Writes a 1 or a 0 bit into the shift register
AB: Modulation inputs

Pulses Expander Controls
Output: Selects 1 of the 11 gated outputs

Volts Expander Controls
1 to 5: Controls the voltage of the active bit

For more detailed information on how the Turing Machine works, please visit the Music Thing
website: https://github.com/TomWhitwell/TuringMachine/

Music Thinking Machines
Berlin

The post Music thing’s Turing Machine gets a free Blocks version appeared first on cdm createdigitalmusic.

by Peter Kirn at June 10, 2016 04:37 PM

Libre Music Production - Articles, Tutorials and News

John Option release debut album, "The cult of John Option"

John Option release debut album,

John Option have just released "The cult of John Option". This is their debut album and it brings together all their singles published in the past few months, including remix versions.

As always, John Option's music is published under the terms of the Creative Commons License (CC-BY-SA) and is produced entirely using free software.

by Conor at June 10, 2016 01:30 PM

June 09, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.1. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

June 09, 2016 10:00 AM

June 06, 2016

open-source – cdm createdigitalmusic

Ableton hacks: Push 2 video output and more

For years, the criticism of laptops has been about their displays – blue light on your face and that sense that a performer is checking email. But what if the problem isn’t the display, but the location of the display? Because being able to output video to your hardware, while you turn knobs and hit pads, could prove pretty darned useful.

Push 2 video output

And so that makes this latest hack really cool. 60 fps(!) video can now stream over a USB cable to Ableton’s Push 2 hardware. You’ll need some way of creating that video texture, but that’s there in Max for Live’s Jitter objects.

David Butler’s imp.push object, out last week, makes short work of this.

The ingredients that made this possible:
1. Ableton’s API documentation for Push 2, available now on GitHub thanks to Ableton and a lot of hard work by Ralf Suckow.

2. libusb

Learn more at this blog post:
imp.push Beta Released

Get the latest version (or collaborate) at GitHub

Next up on his to-do list – what to do with those RGB pads.

Here’s an impressive video from Cycling ’74 — ask.audio scooped us on this story last week, hat tip to them.

Thanks to Bjorn Vayner for the tip!

ubermap

Push 2 mappings

And while you’re finding cool stuff to do to expand your Push 2 capabilities, don’t miss this free set of scripts.

Ubermap is a free and open source script for Push 2 designed to let you map VST and AU plug-ins to your Push controller. What’s great about this is that there’s no middle man – nothing like Komplete Kontrol running between you and your plug-in, just direct mapping of parameters. It’s not as powerful or extensive as the Isotonik tool we covered last week, and it’s limited to Push 2 (with some Push 1 support), so you’ll still want to go that route if you fancy using other controller hardware. But the two can be viewed as complementary, particularly as all of this is possible because of Ableton’s API documentation.

You can find the scripts on the Ableton forum:

Ubermap for Push 2 (VST/AU parameter remapping)

There are links there to more documentation and tips on configuration of various plug-ins. Or to grab everything directly, head to GitHub:

http://bit.ly/ubermap-src

Now, let’s hope this paves the way for more native support in future releases of Live, and some sort of interface for doing this in the software without custom scripts. But there’s no reason to wait – these solutions do work now.

Previously:

Ableton just released every last detail of how Push 2 works

You can now access the Push 2 display from Max

Ableton hacks: map anything, even Kontakt and Reaktor

The post Ableton hacks: Push 2 video output and more appeared first on cdm createdigitalmusic.

by Peter Kirn at June 06, 2016 03:28 PM

June 03, 2016

blog4

Embedded Artist Berlin concert 3.6.2016

After the great concert last week in Linz during the Amro festival at Stadtwerkstatt, we play as Embedded Artist tonight in Berlin at Ausland:
http://ausland-berlin.de/embedded-artist-antez-morimoto

by herrsteiner (noreply@blogger.com) at June 03, 2016 12:50 AM

June 01, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #17: An interview with Frank Piesik

LMP Asks #17: An interview with Frank Piesik

This month we talked with Frank Piesik, a musician, inventor and educator living in Bremen.

Hi Frank, thanks for talking with us! First, can you tell us a little about yourself?

by Scott Petersen at June 01, 2016 02:05 PM

Contest: Win an amazing MOD Duo!

Contest: Win an amazing MOD Duo!

To commemorate the last batch shipment to Kickstarter backers, MOD Devices have set up a social media contest to give away a MOD Duo, the hardware stompbox that runs on Linux and a whole ecosystem of FLOSS audio plugins.

by Conor at June 01, 2016 10:46 AM

May 31, 2016

ardour

Nightly builds are now for TESTING only

The master development branch of Ardour has recently been merged with two major development branches. These bring major new functionality to Ardour (tempo ramps and VCA masters, among other things), but the result is a new version of Ardour. This version is sufficiently different that it could alter/damage your Ardour configuration files and may not correctly work with existing sessions. We have therefore tagged it "5.0-pre0" so that it will create new configuration folders and not interact with your settings and preferences for older versions of Ardour.

read more

by paul at May 31, 2016 08:40 PM

Linux – cdm createdigitalmusic

iZotope Mobius and the crazy fun of Shepard Tones

I always figure the measure of a good plug-in is, you want to tell everyone about it, but you don’t want to tell everyone about it, because then they’ll know about it. iZotope’s Möbius is in that category for me – it’s essentially a moving filter effect. And it’s delicious, delicious candy.

iZotope have been on a bit of a tear lately. The company might be best known for mastering and restoration tools, but in 2016, they’ve had a series of stuff you might build new production ideas around. And I keep going to their folder in my sets. There’s the dynamic delay they built – an effect so good that you’ll overlook the fact that the UI is inexplicably washed out. (I just described it to a friend as looking like your license expired and the plug-in was disabled or something. And yet… I think there’s an instance of it on half the stuff I’ve made since I downloaded it.)

More recently, there was also a plug-in chock full of classic vocal effects.

iZotope Möbius brings an effect largely used in experimental sound design into prime time.

At its core is a perceptual trick called the “Shepard Tone” (named for psychologist Roger Shepard). Like the visual illusion of stripes on a rotating barber pole, the sonic illusion of the Shepard Tone (or the continuously-gliding Shepard–Risset glissando) is such that you perceive endlessly rising motion.

Here, what you should do for your coworkers / family members / whatever is definitely to turn this on and let them listen to it for ten hours. They’ll thank you later, I’m sure.

The Shepard Tone describes synthesis – just producing the sound. The Möbius Filter applies the technique to a resonant filter, so you can process any existing signal.
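
The synthesis side is simple enough to sketch: stack octave-spaced partials that glide upward together, with a raised-cosine window over log-frequency fading each partial in at the bottom of the stack and out at the top, so the rise never seems to end. A minimal illustration (all parameters – base frequency, partial count, cycle time – chosen arbitrarily):

```python
import math

def shepard_chunk(t_start, n_samples, rate=44100,
                  n_partials=6, base=55.0, cycle=10.0):
    """Render n_samples of a continuously rising Shepard tone."""
    out = []
    for n in range(n_samples):
        t = t_start + n / rate
        pos = (t / cycle) % 1.0          # 0..1 through one rising cycle
        sample = 0.0
        for k in range(n_partials):
            f = base * 2 ** (k + pos)    # each partial climbs one octave per cycle
            # amplitude window over the partial's place in the stack
            w = 0.5 - 0.5 * math.cos(2 * math.pi * (k + pos) / n_partials)
            sample += w * math.sin(2 * math.pi * f * t)
        out.append(sample / n_partials)
    return out

chunk = shepard_chunk(0.0, 1024)
```

Möbius Filter’s twist, per iZotope, is applying this cyclic motion to a resonant filter sweep rather than to the partials themselves.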

Musical marketing logic is such that of course you’re then obligated to tell people they’ll want to use this effect for everything, all the time. EDM! Guitars! Vocals! Whether you play the flugelhorn or are the director of a Bulgarian throat singing ensemble, Möbius Filter adds the motion and excitement every track needs!

And, uh, sorry iZotope, but as a result I find the sound samples on the main page kind of unlistenable. Of course, taste is unpredictable, so have a listen. (I guess actually this isn’t a bad example of a riser for EDM so much as me hating those kinds of risers. But then, I like that ten hours of glissandi above, so you probably shouldn’t listen to me.)

https://www.izotope.com/en/products/create-and-design/mobius-filter/sounds.html

Anyway, I love the sound on percussion. Here’s me messing around with that, demonstrating the ability to change direction, resonance, and speed, with stereo spatialization turned on:

The ability to add sync effects (and hocketing, with triplet or dotted rhythms) for me is especially endearing. And while you’ll tire quickly of extreme effects, you can certainly make Möbius Filter rather subtle, by adjusting the filter and mix level.

Möbius Filter is US$49 for most every Mac and Windows plug-in format. A trial version is available.

screenshot_438

https://www.izotope.com/en/products/create-and-design/mobius-filter.html

It’s worth learning more about the Shepard and Risset techniques in general, though – get ready for a very nice rabbit hole to climb down. Surprisingly, the Wikipedia article is a terrific resource:

Shepard tone

If you want to try coding your own Shepard tone synthesis, you can do so in the free and open source, multi-platform environment SuperCollider. In fact, SuperCollider is what powered the dizzying musical performance by Marcus Schmickler that CDM co-hosted with CTM Festival last month here in Berlin. Here’s a video tutorial that will guide you through the process (though there are lots of ways to accomplish this).

The technique doesn’t stop in synthesis, though. Just as the same basic perceptual trick can be applied to rising visuals and rising sounds, it can also be used in rhythm and tempo – which sounds every bit as crazy as you imagine. Here’s a description of that, with yet more SuperCollider code and a sound example using breaks. Wow.

Risset rhythm – eternal accelerando

Finally, the 1969 rendition of this technique by composer James Tenney is absolutely stunning. I don’t know how Ann felt about this, but it’s titled “For Ann.” (“JAMES! MY EARS!” Okay, maybe not; maybe Ann was into this stuff. It was 1969, after all.) Thanks to Jos Smolders for the tip.

Good times.

So, between Möbius Filter and SuperCollider, you can pretty much annoy anyone. I’m game.

https://supercollider.github.io

The post iZotope Mobius and the crazy fun of Shepard Tones appeared first on cdm createdigitalmusic.

by Peter Kirn at May 31, 2016 07:11 PM

Scores of Beauty

Music Encoding Conference 2016 (Part 1)

About a year ago I posted a report of my first appearance at the Music Encoding Conference, which had taken place in Florence (Italy). I then introduced the idea of interfacing LilyPond with MEI, the de facto standard in (academic) digital music edition, and was very grateful to be welcomed warmly by that scholarly community. Over the past year this idea became increasingly concrete, and so I’m glad that German research funds made it possible to present another paper at this year’s conference, although Montréal (Canada) isn’t exactly around the corner. In a set of two posts I will talk about my impressions in general (current post) and my paper and other LilyPond-related aspects (next post).

MEI (the Music Encoding Initiative, which is both a community and a format specification) is a quite small and friendly community, although it basically represents the Digital Humanities branch of musicology as a whole. As a consequence it’s nice to see many people again at this yearly convention. There were 67 registered participants from 10 countries, with a rather strong focus on North America and central Europe (last year in Florence I think we were around 80).

The MEC is a four-day event, with days two and three dedicated to actual paper presentations. The first day features workshops, while the fourth day is an “unconference day” giving the opportunity for spontaneous or pre-arranged discussion and collaboration. A sub-event that seems to gain relevance each year is the conference banquet – one could even imagine that by now this plays a role when applying to organize the next MECs 😉 . We had a nice dinner at the Auberge Saint Gabriel with excellent food and wine and an extremely high noise floor that I attribute to the good mood and spirit we all had. And on the last evening we had the chance to attend a lecture recital with Karen Desmond and the VivaVoce ensemble, who gave us a commented overview of the history of notation from around 900 to the late 16th century.

Ensemble VivaVoce and Karen Desmond (click to enlarge)

Verovio Workshop

From the workshops I decided to attend Verovio – current status and future directions, which was partly a presentation of the tool itself and its latest development, but also a short hands-on introductory tutorial (OK, “hands-on” was limited to having the files available to look through and modify the configuration variables). Verovio is currently “the” tool of choice for displaying scores in digital music editions, so it’s obvious that I’m highly interested in learning more about it. Basically it is a library that renders MEI data to scores in SVG files, with a special feature being that the DOM structure of the SVG file matches that of the original MEI, which makes it easy to establish two-way links between source and rendering. Verovio is written in C++ and compiled to a number of target environments/languages. The most prominent one is JavaScript through which Verovio provides real-time engraving in the browser. You should consider having a look at the MEI Viewer demonstration page.

Screenshot from the Verovio website, showing the relation of rendering and source structure (click to enlarge)

Verovio’s primary focus is on speed and flexibility, and what can I say? It’s amazing! Once the library and the document have been downloaded, the score is rendered and modified near-instantly, with a user experience matching ordinary web browsing. It is possible to resize and navigate a score in real time with instant reflow. Score items can easily be accessed through JavaScript and may be used to write back any actions to the original source file. And as we’re in the XML domain throughout, you can do cool things like rendering remotely hosted scores or extracting parts through XSL transformations and queries. A rather new feature is MIDI playback with highlighting of the played notes. The MIDI player is linked quite tightly into the document, so you can use the scrollbar or click on notes to jump playback, with everything staying robustly in sync.

Of course this performance comes at a cost: as Verovio is tuned for speed and flexibility, its engraving engine is rather simplistic. Apart from the fact that it doesn’t yet support everything a notation program would need, it will probably never compete with LilyPond in terms of engraving quality. On the other hand, LilyPond will probably never compete with Verovio on its native qualities, speed and flexibility. This boils down to Verovio and LilyPond being perfect complements rather than competitors. They should be able to happily coexist side by side – within the same editing project or even editing environment. But I’ll get back to that in the other post.

Paper Presentations

Days two and three were filled with paper presentations and posters, and I can hardly give a comprehensive account of everything. Instead I have to pick a few things and make some remarks from a somewhat LilyPond-ish perspective.

Our nice conference hall (presentation by Reiner Krämer).

Our nice conference hall. “Cope events” are somewhat like MIDI wrapped in LISP (click to view full image)

Metadata and Linked Data

Generally speaking the MEI has two independent objectives: music editing and metadata. The original inventor of MEI, Perry Roland, is actually a librarian, and so documenting everything about sources is an inherent goal in the MEI world. Typical projects in that domain might be the cataloguing of a historic library such as the Sources of the Detmold Court Theatre Collection (German only).

But encoding the physical sources alone doesn’t go as far as it could without considering the power of linking data. There are numerous items in such a house that may refer to each other and provide additional information: bills, copyists’ marks, evening programmes, comments and modifications to individual copies of the music, and much more. Making this kind of information retrievable, possibly across projects, promises new areas of research.

Encoding enhanced data specifying concrete performances of a work is another related area of research. Existing approaches start from secondary information like inscriptions in the performance material and go all the way to designing systems for encoding timing, articulation and dynamics from recorded music, as presented by Axel Berndt. While still far from extracting their data directly from recordings, it seems a very promising project to provide a solid data foundation for investigating parameters of “musical” performance, for example determining a “rubato fingerprint” for a given pianist. Of course this also works in the other direction, and we heard a MIDI rendering of a string quartet of astonishing liveliness. I’d be particularly interested to see if that technology could be built upon for notation editors’ playback engines.

Extending the Scope of MEI

A ubiquitous topic alongside actual music encoding is how to deal with specific repertoire that isn’t covered by Common Western Music Notation. As MEI is so flexible and open, it is always possible to create project-specific customizations to include the notation repertoire at hand. But that freedom also carries the risk of the format splintering to the point of meaninglessness. This is why it is so important to regularly discuss these things in the wider MEI community.

The top targets in this area seem to be neumes and lute (and other) tablature systems, while I didn’t see any attempts towards encoding contemporary or non-western notation styles so far.

Edition Projects

Of course there also were presentations of actual edition projects, of which I’ll mention just a few.

Neuma is a digital library of music scores encoded in MEI (and partially still MusicXML). It features searching by phrases, and the scores can be referenced to be rendered anywhere with Verovio (as described above). They have also been working with LilyPond and would be happy to have this as an additional option for presenting higher quality renderings of their scores and incipits.

Johannes Kepper gave an insightful and also amusing presentation about the walls they ran into with their digital Freischütz edition. This project pushed the limits of digital music edition pretty hard and can be used as a reference of approaches and limitations equally. Just imagine that their raw data is about 230 MB worth of XML files – out of which approximately 100 MB account for the encoding of the autograph manuscript alone …

A poster was dedicated to the “genetic edition” of Beethoven’s sketches. This project sets out to encode the genetic process that can be retraced in the manuscript sources giving access to each step of Beethoven’s working process individually.

Salsah is a project at the Digital Humanities Lab at the University of Basel. They work on an online presentation of parts of the Anton Webern Gesamtausgabe, namely the sketches (while the “regular” works are intended to be published as a traditional print-only edition). The project is still in the prototype stage, but it has to be said that it is fighting somewhat desperately with its data. The Webern edition is realized using Finale – and the exported MusicXML isn’t exactly suited to making semantic sense of … Well, they would have had the solution at their fingertips, but two and a half years ago I wasn’t able to convince them to switch to LilyPond before publishing the first printed volumes 😉


After these more general observations a second post will go into more detail about LilyPond specific topics, namely MEI’s lack of a professional engraving solution, my own presentation, and nCoda, a new editing system that was presented for the first time at the MEC (incidentally just two days after the flashy and heavily pushed Dorico announcement). I have been in touch with the nCoda developers for over a year now, and it was very nice and fruitful to have a week together in person – but that’s for the next post …

by Urs Liska at May 31, 2016 06:44 AM

May 28, 2016

A touch of music

Modeling rhythms using numbers - part 2

This is a continuation of my previous post on modeling rhythms using numbers.

Euclidean rhythms

The Euclidean Rhythm in music was discovered by Godfried Toussaint in 2004 and is described in his 2005 paper "The Euclidean Algorithm Generates Traditional Musical Rhythms". Euclid's algorithm for the greatest common divisor of two numbers is applied rhythmically, with the two numbers giving the counts of beats and silences; this generates the majority of important world music rhythms.

Do it yourself

You can play with a slightly generalized version of Euclidean rhythms in your browser using a p5js-based sketch I made to test my understanding of the algorithms involved. If it doesn't work in your preferred browser, retry with Google Chrome.

The code

The code may still evolve in the future. There are some possibilities not explored yet (e.g. using ternary number systems instead of binary to drive 3 sounds per circle). You can download the full code for the p5js sketch on GitHub.

screenshot of the p5js sketch running. click the image to enlarge

The theory

So what does it do and how does it work? Each wheel contains a number of smaller circles. Each small circle represents a beat. With the length slider you decide how many beats are present on a wheel.  

Some beats are colored dark gray (these can be seen as strong beats), whereas other beats are colored white (weak beats). To strong and weak beats one can assign a different instrument. The target pattern length decides how many weak beats exist between the strong beats. Of course it's not always possible to honor this request: in a cycle with a length of 5 beats and a target pattern length of 3 beats (left wheel in the screenshot) we will have a phrase of 3 beats that conforms to the target pattern length, and a phrase consisting of the 2 remaining beats that make a "best effort" to comply to the target pattern length. 

Technically this is accomplished by running Euclid's algorithm. This algorithm is normally used to calculate the greatest common divisor of two numbers, but here we are mostly interested in its intermediate results. In Euclid's algorithm, to calculate the greatest common divisor of an integer m and a smaller integer n, the smaller number n is repeatedly subtracted from the greater until the greater is zero or becomes smaller than the smaller, in which case it is called the remainder. This remainder is then repeatedly subtracted from the smaller number to obtain a new remainder. This process is continued until the remainder is zero. When that happens, the corresponding smaller number is the greatest common divisor of the original two numbers n and m.

Let's try it out on the situation of the left wheel in the screenshot. The greater number m is 5 (length) and the smaller number n is 3 (target pattern length). Now the recipe says to repeatedly subtract 3 from 5 until you get something smaller than 3. We can do this exactly once:

5 - (1).3 = 2

We can rewrite this as:

5 = (1).3 + 2

This we can interpret as: the cycle of 5 beats is to be decomposed as 1 phrase with 3 beats, followed by a phrase with 2 beats (the remainder). Each phrase consists of a single strong beat followed by all weak beats. In a symbolic representation easier read by musicians one might write: x..x. (In the notation of the previous part of this article one could also write 10010).

Euclid's algorithm doesn't stop here. Now we have to repeatedly subtract the remainder 2 from the smaller number 3:

3 = (1).2 + 1

This in turn can be read as: the phrase of 3 beats can be further decomposed as 1 phrase of 2 beats followed by a phrase consisting of 1 beat. In a symbolic representation: x.x. Euclid continues:

2 = (2).1 + 0

The phrase of two beats can be represented symbolically as: xx. We've reached remainder 0 and Euclid stops: apparently the greatest common divisor between 5 and 3 is 1.

Now it's time to realize what we really did: 
  • We decomposed a phrase of 5 beats in a phrase of 3 beats and a phrase of 2 beats making a rhythm x..x. 
  • Then we further decomposed the phrase of 3 beats into a phrase of 2 beats followed by a phrase of 1 beat. 
  • We can substitute this refined 3 beat phrase in our original rhythm of 5 = 3+2 beats to get a rhythm consisting of 5 = (2 + 1) + 2 beats: x.xx. 
  • I hope it's clear by now that by choosing how long to continue using Euclid's algorithm, we can decide how fine-grained we want our rhythms to become. 
  • This is where the max pattern length slider comes into play. 
The length slider and the target pattern slider will determine a rough division between strong and weak beats by running Euclid's algorithm just once, whereas the max pattern length slider helps you decide how long to carry on Euclid's algorithm to further refine the generated rhythm.
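
The worked 5-against-3 decomposition above is short enough to translate directly into code (a sketch of the article's decomposition only, not the p5js sketch's actual implementation):

```python
def euclid_phrases(m, n):
    """One pass of repeated subtraction: m = q*n + r, giving q phrases
    of n beats followed by one remainder phrase of r beats (if r > 0)."""
    q, r = divmod(m, n)
    return [n] * q + ([r] if r else [])

def to_pattern(phrases):
    """Each phrase is one strong beat 'x' followed by weak beats '.'."""
    return "".join("x" + "." * (p - 1) for p in phrases)

coarse = euclid_phrases(5, 3)      # [3, 2]  ->  "x..x."
# refine the 3-beat phrase using the remainder 2 as the new divisor
fine = euclid_phrases(3, 2) + [2]  # (2+1)+2 ->  "x.xx."
```

Stopping after the first pass gives the coarse rhythm; substituting each refined phrase back in, as the max pattern length slider controls, gives progressively finer patterns.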


by Stefaan Himpe (noreply@blogger.com) at May 28, 2016 02:22 PM

May 24, 2016

digital audio hacks – Hackaday

Secret Listening to Elevator Music

While we don’t think this qualifies as a “fail”, it’s certainly not a triumph. But that’s what happens when you notice something funny and start to investigate: if you’re lucky, it ends with “Eureka!”, but most of the time it’s just “oh”. Still, it’s good to record the “ohs”.

Gökberk [gkbrk] Yaltıraklı was staying in a hotel long enough that he got bored and started snooping around the network, like you do. Breaking out Wireshark, he noticed a lot of UDP traffic on a nonstandard port, so he thought he’d have a look.

A couple of quick Python scripts later, he had downloaded a number of sample packets, decoded them into hex, and found the signature of LAME, an MP3 encoder. He played around with byte offsets until he got a valid MP3 file out, and voilà, the fantastic reveal! It was the hotel’s elevator music stream — that he could hear outside in the corridor with much less effort. (Sad trombone.)
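The byte-offset hunt boils down to scanning for an MPEG audio frame sync word. Here is a simplified sketch of that idea, not [gkbrk]'s actual script, and the packet contents below are made up:

```python
def find_mp3_offset(data):
    """Return the offset of the first plausible MPEG audio frame sync:
    0xFF followed by a byte whose top three bits are all set."""
    for i in range(len(data) - 1):
        if data[i] == 0xFF and (data[i + 1] & 0xE0) == 0xE0:
            return i
    return None

# a made-up packet: junk header bytes, then a fake MP3 frame header,
# then the encoder tag that gave the game away
packet = b"\x00\x01\x02\x03" * 2 + b"\xff\xfb\x90\x64" + b"LAME3.99"
offset = find_mp3_offset(packet)   # 8: where the audio payload starts
has_lame = b"LAME" in packet       # the encoder signature he spotted
```

In practice you would strip everything before the first sync offset from each UDP payload and concatenate the rest into a candidate MP3 file.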

But just because nothing came up this time doesn’t mean that nothing will come up next time. And it’s important to keep your skills sharp for when you really need them. We love following along with peoples’ reverse engineering efforts, whether or not they end up finding anything. What oddball signals have you found lately?

Thanks [leonardo] for the tip! Wireshark graphic from Softpedia’s entry on Wireshark. Simulated-phosphor audio display by Oona [windytan] Räisänen (check that out!).


Filed under: digital audio hacks, security hacks, slider

by Elliot Williams at May 24, 2016 08:01 AM

May 22, 2016

aubio

Install aubio with pip

You can now install aubio's python module using pip:

$ pip install git+git://git.aubio.org/git/aubio

This should work for Python 2.x and Python 3.x, on Linux, Mac, and Windows. PyPy support is on its way.

May 22, 2016 01:00 PM

May 17, 2016

OSM podcast

May 14, 2016

Libre Music Production - Articles, Tutorials and News

EMAP - a GUI for Fluidsynth

EMAP - a GUI for Fluidsynth

EMAP (Easy Midi Audio Production) is a graphical user interface for the Fluidsynth soundfont synthesizer. It functions as a Jack compatible:

by admin at May 14, 2016 04:12 PM

May 11, 2016

Pid Eins

CfP is now open

The systemd.conf 2016 Call for Participation is Now Open!

We’d like to invite presentation and workshop proposals for systemd.conf 2016!

The conference will consist of three parts:

  • One day of workshops, consisting of in-depth (2-3hr) training and learning-by-doing sessions (Sept. 28th)
  • Two days of regular talks (Sept. 29th-30th)
  • One day of hackfest (Oct. 1st)

We are now accepting submissions for the first three days: proposals for workshops, training sessions and regular talks. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Use Cases: systemd in today’s and tomorrow’s devices and applications
  • systemd and containers, in the cloud and on servers
  • systemd in distributions
  • systemd in embedded devices and IoT
  • systemd on the desktop
  • Networking with systemd
  • … and everything else related to systemd

Please submit your proposals by August 1st, 2016. Notification of acceptance will be sent out 1-2 weeks later.

If submitting a workshop proposal please contact the organizers for more details.

To submit a talk, please visit our CfP submission page.

For further information on systemd.conf 2016, please visit our conference web site.

by Lennart Poettering at May 11, 2016 10:00 PM

May 10, 2016

Linux – cdm createdigitalmusic

Trigger effects in Bitwig with MIDI, for free

In the latest chapter of “people on the Internet doing cool things for electronic music,” here’s a creation by Polarity. It lets you rapidly trigger effects parameters via MIDI. And if you’re a Bitwig Studio enthusiast, it’s available for free.

Clever stuff. YouTube has the download link and instructions.

Polarity, based in Berlin, describes himself thusly:

Hi i´m Polarity and do music at home in my small bedroom studio. I record regularly sessions and publish them here. I also broadcast live on twitch from time to time.

Hello, my name is Polarity and I make music here in Berlin in my small bedroom. I regularly record sessions and publish them here. Anyone who wants to can also follow along live on Twitch, where I often broadcast live!

(Ah, I was wondering when I’d run into someone using Twitch – the live streaming service used largely by gamers – for music.)

More:
Twitch.tv: http://www.twitch.tv/polarity_berlin
Soundcloud: https://soundcloud.com/polarity

It’s an interesting form of promotion – give musicians something they can use. And if that’s where music is headed, maybe that’s not a bad thing. It means the means of making music will spread along with musical ideas, which in today’s connected, worldwide online village seems a positive.

The post Trigger effects in Bitwig with MIDI, for free appeared first on cdm createdigitalmusic.

by Peter Kirn at May 10, 2016 09:29 PM

May 06, 2016

KXStudio News

Changes in KXStudio repositories

Hey everyone, just a small heads up about the KXStudio repositories.

If you use Debian Testing or the new Ubuntu 16.04 you probably saw some warnings regarding weak SHA1 keys when checking for updates.
We're aware of this issue and a fix is coming soon, but it will require some changes in the repositories.

First, we'll get rid of the 'lucid' builds and rebuild all of them in the 'trusty' series.
For those of you that were using Debian 6 or something older than Ubuntu 14.04, the repositories will stop working for you later this month.

Second, the gcc5 specific packages will be migrated from 'wily' series to 'xenial'.
This means you'll no longer be able to use the KXStudio repositories if you're running Ubuntu 15.10.
If that's the case for you, please update to 16.04 as soon as possible. Note that 15.10 will be officially end-of-life in 2 months.

And finally, the gcc5 packages will begin using Qt5 instead of Qt4 for some applications.
This will include Carla, Qtractor and the v1 series plugins.
Hopefully this won't break anything, but if it does please let us know.

That's it for now. Have a nice weekend!

by falkTX at May 06, 2016 10:00 AM

May 05, 2016

News – Ubuntu Studio

Help Us Put Some Polish on Ubuntu Studio

We are proud to have Ubuntu Studio 16.04 out in the wild. And the next release can and should be better. It WILL be better if you help! Are there specific packages that should be included or removed? Are there features you would like to see? We cannot promise to do everything you ask, but […]

by Set Hallstrom at May 05, 2016 09:58 AM

fundamental code

Lad Discussion Peaks

A History of LAD As Seen Through Heated Discussion

Warning
Summarizing years of discussions is a difficult task. I do not intend to distort the meaning of quotes, and if you feel a particular quote is being misrepresented, please let me know. This article is designed to review the community as a whole, not to impose my opinions onto it. Posts reflect the sentiment of the user at the time of posting and may well not reflect the current state of the projects or even the authors.

What is LAD?

To get a bigger picture of what exactly has led up to the current state of affairs within LAD, I decided it was a good idea to read through some historic [LAD] discussions which made up some of those peaks in activity. This is somewhat biased towards the flame wars and community rantings, but those discussions should still reveal plenty about the evolution of pain points within the community. First, to frame this community analysis, let’s look at how the linux audio mailing list officially defines its goal:

Our goal is to encourage widespread code re-use and cooperation, and to provide a common forum for all audio related software projects and an exchange point for a number of other special-interest mailing lists.

This simply shows that the mailing list should be a cooperative place where information is exchanged. A medium like this is a pretty darn valuable resource, and it was recognized as such early on.

The problem is that most Linux audio apps are developed by people who have full-time jobs doing other things. The problems involved in designing audio apps are so great that even those people who are able to work full time on Linux audio are often stumped as to how to implement the desired solutions.
— Mark Knecht October 2002

With varying levels of success there have been some huge discussions about the tradeoffs for different plugin standards, session managers, licenses, knobs (boy do audio devs love talking about knobs), and a variety of other topics. Even with the advantages that something like a community mailing list offers, it’s questionable whether people really consider linux audio developers as a whole a community.

I think the linux audio world is too small and varied to have a tightly knit organisation like the Gnome guys.
— Steve Harris June 2004
If you want to organize something go ahead and organize it, but please don’t tell me that I have to conform to some consumer driven vision of the great commercial future of Linux Audio.
— Jan Depner June 2004
The notion of "the development community" is a misnomer. In fact, what we have are "development communities" (plural).
— Fred Gleason February 2013

Fundamentally, the 'community' is made up of a large variety of independent individuals who need a wide spread of specializations in order to make effective software. This has typically manifested itself as many different single-developer projects without a great sense of cohesion. This sort of hobbyist development has produced a lot of content, though the overall workflow may fall short of users' expectations, and many projects are subject to bitrot after the small development team moves on to other projects. Everyone has conflicting ideas on how things should work:

Everyone has their point of view. It’s not like you will tell someone "I want to add this feature to your app/api" and will say "Ok". You will simply get an answers like: -No, sorry, I wont accept that patch, i’d rather the library concentrates only on this. -Why dont you do it as a separate library? -Feel free to fork this and add it yourself. -Yeah I recognize it’s useful, but I think it’s out of place, inconsistent with the rest, that I try to keep simple.

— Juan June 2005

Before moving on to the issues presented in this community I want to take a brief detour showing how the linux-audio-dev mailing list and the linux-audio-user mailing list are linked. Within the overall community you frequently have developers who extensively use other LA tools and you have quite a few users who occasionally dabble in details generally reserved for developers. By looking at how many people fall into each one of these categories as a function of how often they post to LAD/LAU we can see that there is an overlap for casual users and a very strong overlap for heavyweight posters.

lml overall cross posters

This overall trend also exists on a much smaller scale. Within any given month there is a significant number of people who have posted on both lists.

lml monthly cross posters

These individuals tend to generate a very significant number of the total posts in any given month as well.

lml monthly cross posts

Given this relationship, a good number of the problems observed on the LAD list should correspond to issues visible to users as well. In some cases, like the 'What sucks about Linux Audio' threads, there have been corresponding threads on both lists. In other cases ideas simply flow from one location to the other.
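The overlap measurement behind these charts can be sketched with Python's Counter. The sender lists here are made-up stand-ins for names parsed out of the LAD/LAU mail archives:

```python
from collections import Counter

# hypothetical per-list sender logs; in reality these would be
# extracted from the mailing list archives
lad_posts = ["alice", "bob", "alice", "carol", "bob"]
lau_posts = ["bob", "dave", "carol", "bob", "erin"]

lad, lau = Counter(lad_posts), Counter(lau_posts)
cross = set(lad) & set(lau)      # people who posted on both lists
# share of total traffic generated by cross-posters
share = sum(lad[p] + lau[p] for p in cross) / (len(lad_posts) + len(lau_posts))
```

Intersecting the two sets of names gives the number of cross-posters; summing their message counts gives their share of the combined traffic.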

Initial Friction

In the past it wasn’t all that unusual for these disagreements to leak onto the mailing lists where they could grow substantially. A good example of this friction would be the Impro-Visor forking effort in 2009. In this thread a fork of an existing project had been created due to GPL licensing issues, but the way the forking was done produced disagreement within the community.

One of the main reasons why R. Stallman started GNU/FSF/GPL because of it’s social aspect. You learn kids on schools for example to corporate and help each other, being social.
— Grammostola Rosea Aug 2009
Forking a project is by it’s nature, and GPL "rights" aside, quite an impact on the author. He or she may have been sweating over their code base for some time, and i don’t think anyone could say they wouldn’t feel a bit awkward if they saw their code being forked, and developed further. Even more so for those who may not have developed their code under the assumption of GPL. From an "outsider’s" point of view, it would seem like a big decision to take both ways, if both parties have any sort of empathy.
— Alex Stone Aug 2009

The individual forking the project could be described as quite aggressive in his approach, which spawned quite the meandering discussion. This was one of the first threads in my reading of [LAD] which seemed to significantly put users off, and it certainly didn’t help that in June a rather heated flame war over RealtimeKit had already driven away that project’s developer.

I have been following these list serves for a while, but I am just not interested in this kind of drama, and would like to mention for the record that I will no longer be following the lad or lau list serves.
— Justin Smith July 2009
In the last 18 months in LAD we’ve seen some pretty emotive flamewars about Reaper, LV2 in closed source software, LinuxSampler licensing, plugin output negotiation, JACK packaging, JACK and DBUS, PulseAudio, the way qjackctl starts up jackd, RTKit, and probably some other things I’ve forgotten. And this. This isn’t a high traffic list; the flames quite likely outnumber the rest.
— Chris Cannam July 2009
So now is the time to give your positive feedback and constructive critics. Don’t troll and don’t start another flame war unless your goal is to alienate me to stage of me detaching from this community. I will not respond to trolish and flamish mails, feel free to contact me with private mails if you prefer so.
— Nedko Arnaudov November 2009

As these discussions scale out of proportion it’s easy for them to shift from a heated dialog into a flame war. These flame wars often result in huge misunderstandings, a lot of misinformation, tons of angry emails, and, importantly, wasted time. Wasting readers' time is a significant offence if the lists want to retain users and keep the discussions targeted and helpful to those involved.

When Flamewars Aren’t Stoked

Of course these so-called flame wars are not entirely bad for a community to have.

Most of the occasionally 'caustic' folk in this community …​ understand that heated arguments are just a part of how developers find the best solution, and there is no ill will involved. It’s simply a useful tool/process - and arguably, I would say, the most effective way of hammering out good software design the world has seen to date.

Unfortunately there are always a few childish fools who don’t understand this concept (or think it’s a competition and can’t handle the fact that they were wrong) and elevate silly little arguments into long term personal grudges…​ Like trolls, they are best ignored while the rest of us get on with useful things.

What we’re looking for is less completely irrelevant noise like this. Particularly in response to jokes (blatant smileys and all).

— Drobilla July 2009

When a heated discussion stays on topic, real work can get done, though it is often off-putting to bystanders and to those caught in the middle.

When Flamewars Are Stoked

Generally, for a lot of these flame wars to take flight there needs to be a variety of people stoking the flames without directly contributing to the discussion in a meaningful way (though this is not always the case). In most threads this was done by a variety of users, mostly ones who weren’t very frequent posters. There was one repeat offender who, during July 2010, caused quite the meltdown on the LAD mailing list: Ralf Mardorf. I originally wasn’t going to mention this, but essentially all flames and off-topic communication that July could be traced back to him.

Who is Ralf Mardorf?

I never programmed anything for Linux. I’m not able to do it and I don’t have the time to learn it.

I subscribed to the list, because I needed some information when I tried to program for Linux audio. I guess you want people to learn how to program for Linux audio. What you’re looking for is an attitude test, not a test about programming knowledge. I’ve got knowledge about programming, not about programming for Linux. You don’t like my attitude, but I hope you like other people who have the attitude that you want, even if they don’t have programming knowledge. (This is another issue, but not that one OS might or might not be good, better or what ever, so I guess I should reply :p)

Btw. on user lists a user don’t get some needed information, e.g. actually about what kernel is fine with rtirq and what kernel isn’t fine with it, so it can become impossible to set up an audio Linux, another reason why I’m subscribed to this list.

I’m and other users are responsible for my/their Linux installations, we should use all available sources to get knowledge. Some, me too, do so. In addition now you expect from users that they also should have the same attitude?

— Ralf Mardorf August 2009

And what happened in July?

Well, it started off in a discussion about MIDI jitter. This is something which can be quantified and discussed in terms of numbers quite easily. Ralf brought up the issue, which could imply some interesting bugs, design flaws, or configuration issues. Some simple tests to find the issue were proposed, but the data was never returned to the list, resulting in posts such as:

I know very gifted musicians who do like me and they always 'preach' that I should stop using modern computers and I don’t know much averaged people. So the listeners in my flat for sure would be able to hear even failure that I’m unable to hear.
— Ralf Mardorf July 2010

There is no objective valid timing fluctuation. The musical savant next door might be much more sensitive than I’m, regarding to the groove, I don’t know …​ I guess there doesn’t live a musical savant next door, perhaps I’m this savant ;).

Anyway, forget about my assumptions about ms of jitter. I’m fine with the C64, Atari ST and all those stand alone sequencers from the 80ies. I tested did it, but I’m sure I’ll be able to hear hear the difference to my Linux computer …​ not when listening to all MIDI instruments played alone at the same time, but when listening to MIDI instruments + audio tracks.

— Ralf Mardorf July 2010
Sorry for this PS, I try to learn not to write such a high amount of mails :(, but it could be important.
— Ralf Mardorf July 2010

Of course this was pretty frustrating to a number of developers who wanted to solve the problem at hand.

You are comparing a banana and an orange to find out which one is sweeter. Given the nature of the problem it would help a lot to have as little differences between the systems under test, otherwise it’s impossible to track it down.
— Robin Gareus July 2010
We’re getting seriously off-topic here. After all, this is developer list. What happened to the ALSA MIDI Jitter measurements and test-samples?
— Robin Gareus July 2010

This was followed up by numerous off-topic threads. Ralf Mardorf ended up accounting for 44 of 463 posts in June and 165 of 653 messages in July. He frequently replied to himself, and if you look at the timestamps from that month there’s even a stretch where 7 emails are fired off to the list with no responses from anyone else in between. I’m honestly not sure whether this is intentional trolling, but when a thread named "STEREO RULES" in all caps is created in the midst of the chaos you have to at least suspect it.

The sort of replies which can be seen in this month highlight some of the major issues at play. Developers generally want to know that their software works and that people can use it. They also crucially have very limited time considering that this work is typically done in addition to their other obligations without any return other than the enjoyment of it.

General Thoughts

So, up to this point in history, flame wars have been a problem, fueled by a number of individuals who, intentionally or otherwise, don’t contribute substantially to the original aim of the discussion. Both users and developers of linux audio software seem frustrated with this, as it makes it difficult to obtain information, convey accurate information, and interact with other members of the community without wading through a lot of noise. Some of these issues are mirrored in more recent 'heated discussions', but this writeup is long enough, so that will have to wait for part two.

May 05, 2016 04:00 AM

May 03, 2016

Libre Music Production - Articles, Tutorials and News

Guitarix 0.35 released including much anticipated interface redesign

Guitarix 0.35 released including much anticipated interface redesign

Guitarix has recently seen a new release, version 0.35. As always there are new plugins and bug fixes, but the big news with this release is the overhauled interface, compliments of Markus Schmidt. Markus is also responsible for the CALF Studio Gear plugin design, as well as the DSP of many of its plugins.

by Conor at May 03, 2016 07:10 PM

April 27, 2016

rncbc.org

Qtractor 0.7.7 - The Haziest Photon is out!

Hi everybody,

On the wrap of the late miniLAC2016@c-base.org Berlin (April 8-10), where this Yet Same Old Qstuff* (continued) workshop babbling of yours truly (slides, videos) took place.

There's really one (big) thing to keep in mind, as always: Qtractor is not, never was, meant to be a do-it-all monolith DAW. Quite frankly it isn't a pure modular model either. Maybe we can agree on calling it a hybrid perhaps? And still, all this time, it has been just truthful to its original mission statement--modulo some Qt major version numbers--nb. it started on Qt3 (2005-2007), then Qt4 (2008-2014), it is now Qt5, full throttle.

Now,

It must have been like starting to say: uh, this is probably the best dot or, if you'd rather call it that way, beta release of them all!

Qtractor 0.7.7 (haziest photon) is out!

Everybody is here compelled to update.

Leave no excuses behind.

As for the mission statement coined above, you know it's the same as ever was (and it now goes to eleven years in the making):

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Change-log:

  • LV2 UI Touch feature/interface support added.
  • MIDI aware plug-ins are now void from multiple or parallel instantiation.
  • MIDI tracks and buses plug-in chains now honor the number of effective audio channels from the assigned audio output bus; dedicated audio output ports will keep default to the stereo two channels.
  • Plug-in rescan option has been added to plug-ins selection dialog (yet another suggestion by Frank Neumann, thanks).
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, thanks).
  • Immediate visual sync has been added to main and MIDI clip editor thumb-views (a request by Frank Neumann, thanks).
  • Fixed an old MIDI clip editor contents disappearing bug, which manifested when drawing free-hand (ie. Edit/Select Mode/Edit Draw is on) over and behind its start/beginning position (while in the lower view pane).

Wiki (on going, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Enjoy && Have fun.

by rncbc at April 27, 2016 06:30 PM

April 23, 2016

digital audio hacks – Hackaday

Color-Changing LED Makes Techno Music

As much as we like addressable LEDs for their obedience, why do we always have to control everything? At least the participants of the MusicMaker Hacklab, which was part of the Artefact Festival in February this year, have learned that sometimes we should just sit down with our electronics and listen.

With the end of the Artefact Festival approaching, they still had this leftover color-changing LED from an otherwise scavenged toy reverb microphone. When powered by a 9 V battery, the LED would start a tiny light show, flashing, fading and mixing the very best out of its three primary colors. Acoustically, however, it spent most of its time in silent dignity.

singing_led_led_anatomy

As you may know, this kind of LED contains a tiny integrated circuit. This IC pulse-width-modulates the current through the light-emitting junctions in preprogrammed patterns, thus creating the colorful light effects.

To give the LED a voice, the participants added a 1 kΩ series resistor to the LED’s “anode”, which effectively translates variations in the current passing through the LED into measurable variations of voltage. This signal could then be fed into a small speaker or a mixing console. The LED expressed its gratitude for the life-changing modification by chanting its very own disco song.

singing_led_hook_up_schematic
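The current-to-voltage trick is just Ohm's law. Here is a toy Python model of the sensed signal; the 1 kΩ resistor and ~1.1 kHz switching frequency come from the article, while the duty cycle and LED current are guesses for illustration:

```python
# V = I * R: a 1 kOhm series resistor turns the LED's milliamp-scale
# current swings into volt-scale swings a mixer input can pick up
R_OHMS = 1_000.0

def sense_voltage(current_amps):
    return current_amps * R_OHMS

# a crude model of the IC's ~1.1 kHz PWM drive; 30% duty cycle and
# 5 mA on-current are assumptions, not measured values
SR, F_PWM, DUTY, I_ON = 44_100, 1_100, 0.3, 0.005
signal = [sense_voltage(I_ON if (t * F_PWM / SR) % 1.0 < DUTY else 0.0)
          for t in range(200)]
```

The resulting `signal` is the square wave that dominates the recording; the colorful pattern changes modulate its duty cycle and produce the shifts in pitch and timbre you hear.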

This particular IC seems to operate at a switching frequency of about 1.1 kHz and the resulting square wave signal noticeably dominates the mix. However, not everything we hear there may be explained solely by the PWM. There are those rhythmic “thump” noises, shifts in pitch and amplitude of the sound and more to analyze and learn from. Not wanting to spoil your fun of making sense of the beeps and cracks (feel free to spoil as much as you want in the comments!), we just say enjoy the video and thanks to the people of the STUK Belgium for sharing their findings.


Filed under: digital audio hacks, led hacks

by Moritz Walter at April 23, 2016 11:00 AM

April 22, 2016

open-source – cdm createdigitalmusic

Hack – listen to one LED create its own micro rave

Surprise: there’s a little tiny rave hiding inside a flickering LED lamp from a toy. Fortunately, we can bring it out – and you can try this yourself with LED circuitry, or just download our sound to remix.

Surprise Super Fun Disco LED Hack from Darsha Hewitt on Vimeo.

But let’s back up and tell the story of how this began.

The latest edition of our MusicMakers Hacklab brought us to Leuven, Belgium, and the Artefact Festival held at STUK. Now, with all these things, very often people come up with lofty (here, literally lofty) ideas – and that’s definitely half the fun. (We had one team flying an unmanned drone as a musical instrument.)

But sometimes it’s simple little ideas that steal the show. And so it was with a single LED superstar. Amine Metani brought some plastic toys with flickering lights, and participant Arvid Jense, along with my co-facilitator and all-around artist/inventor/magician Darsha Hewitt, decided to make a sound experiment with them. They were joined by participant (and one-time European Space Agency artist in residence) Elvire Flocken-Vitez.

It seems that the same timing used to make that faux flickering light effect generates analog voltages that sound, well, amazing. (See more on this technique in comments from readers below.)

DarshaHewitt_LEDHACK_01-1024x640

You might not get as lucky as we did with animated LEDs you find – or you might find something special, it’s tough to say. But you can certainly try it out yourself, following the instructions here and on a little site Darsha set up (or in the picture here).

And by popular demand of all our Hacklabbers from Belgium, we’ve also made the sound itself available. So you can try remixing it, sampling it, dancing to it, whatever.

screenshot_344

https://freesound.org/people/dardi_2000/sounds/343087/

More:

http://www.darsha.org/artwork/disco-led-hack/

And follow our MusicMakers series on Facebook (or stay tuned here to CDM).

The post Hack – listen to one LED create its own micro rave appeared first on cdm createdigitalmusic.

by Peter Kirn at April 22, 2016 02:07 PM

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.1 stable release (binaries)

Pre-built binary images of the 1.8.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 22, 2016 12:00 PM

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.6.4 stable release (binaries)

Pre-built binary images of the 1.6.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 22, 2016 11:00 AM

April 21, 2016

News – Ubuntu Studio

New Ubuntu Studio Release and New Project Lead!

New Project Lead In January 2016 we had an election for a new project lead, and the winner was Set Hallström, who will be taking over the project lead position right after this release. He will be continuing for another two years until the next election in 2018. The team of developers has also seen […]

by Set Hallstrom at April 21, 2016 04:44 PM

April 20, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.0. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

April 20, 2016 04:00 PM

OSM podcast

aubio

node-aubio


Thanks to Gray Leonard, aubio now has its own bindings for node.js.

A fork of Gray's git repo can be found at:

A simple example showing how to extract bpm and pitch from an audio file with node-aubio is included.

To install node-aubio, make sure libaubio is installed on your system, and follow the instructions at npmjs.com.

April 20, 2016 12:28 PM

April 17, 2016

Libre Music Production - Articles, Tutorials and News

New video tutorial describing a complete audio production workflow using Muse and Ardour

New video tutorial describing a complete audio production workflow using Muse and Ardour

Libre Music Production proudly presents Michael Oswald's new 8+ hour video tutorial describing a complete audio production workflow using MusE and Ardour.

In this tutorial you will learn how to import, clean up and edit a MIDI file using MusE. It then goes on to show how to import the MIDI file into Ardour and set up instruments to play the song. Then it's on to guitar recording and audio editing in Ardour, selecting sounds and editing several takes.

The tutorial continues with vocal recording and editing, mixing and mastering the song.

by admin at April 17, 2016 04:19 PM

A complete audio production workflow with Muse and Ardour

Audio production with Muse and Ardour is a 6-part video tutorial showing a complete workflow using FLOSS audio tools.

In this tutorial you will learn how to import, clean up and edit a MIDI file using MusE. It then goes on to show how to import the MIDI file into Ardour and set up instruments to play the song.

On to guitar recording and audio editing in Ardour, selecting sounds and editing the takes.

The tutorial continues with vocal recording and editing, mixing and mastering the song.

by admin at April 17, 2016 02:51 PM

April 15, 2016

digital audio hacks – Hackaday

Hackaday Dictionary: Ultrasonic Communications

Say you’ve got a neat gadget you are building. You need to send data to it, but you want to keep it simple. You could add a WiFi interface, but that sucks up power. Bluetooth Low Energy uses less power, but it can get complicated, and it’s overkill if you are just looking to send a small amount of data. If your device has a microphone, there is another way that you might not have considered: ultrasonic communications.

The idea of using sound frequencies above the limit of human hearing has a number of advantages. Most devices already have speakers and microphones capable of sending and receiving ultrasonic signals, so there is no need for extra hardware. Ultrasonic frequencies are beyond the range of human hearing, so they won’t usually be audible. They can also be transmitted alongside standard audio, so they won’t interfere with the function of a media device.

A number of gadgets already use this type of communications. The Google Chromecast HDMI dongle can use it, overlaying an ultrasonic signal on the audio output it sends to the TV. It uses this to pair with a guest device by sending a 4-digit code over ultrasound that authorizes it to join an ad-hoc WiFi network and stream content to it. The idea is that, if the device can’t pick up the ultrasound signal, it probably wasn’t invited to the party.

We reported some time ago on an implementation of ultrasonic data using GNU Radio by [Chris]. His writeup goes into a lot of detail on how he set the system up and shows a simple demo using a laptop speaker and microphone. He used Frequency Shift Keying (FSK) to encode the data into the audio, using a base frequency of 23 kHz and sending data in five-byte packets.
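The scheme is straightforward to sketch: each bit selects one of two tones above the audible range and holds it for a fixed number of samples. Here is a minimal pure-Python 2-FSK modulator; the sample rate, tone spacing, and samples-per-bit values are illustrative assumptions, not [Chris]'s exact parameters:

```python
import math

def fsk_modulate(data, rate=96000, f0=23000, f1=24000, samples_per_bit=480):
    """Encode bytes as 2-FSK: tone f0 for a 0 bit, tone f1 for a 1 bit."""
    samples = []
    for byte in data:
        for bit in range(8):                     # MSB first
            freq = f1 if (byte >> (7 - bit)) & 1 else f0
            for n in range(samples_per_bit):
                samples.append(math.sin(2 * math.pi * freq * n / rate))
    return samples

# One five-byte packet, as in [Chris]'s setup:
signal = fsk_modulate(b"hello")
print(len(signal))  # 5 bytes * 8 bits * 480 samples/bit = 19200
```

A real modem would keep the phase continuous across bit boundaries and shape the transitions (which is what the later switch to Gaussian FSK improves on); this sketch only shows the bit-to-tone mapping.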

Since then, [Chris] has expanded his system so that two devices can communicate bi-directionally, each using a different frequency. He also changed the modulation scheme to Gaussian frequency shift keying for reliability and even added a virtual driver layer on top, so the connection can carry TCP/IP traffic. Yup, he built an ultrasonic network connection.

His implementation underlines one of the problems with this type of data transmission, though: It is slow. The speed of the data transmission is limited by the ability of the system to transmit and receive the data, and [Chris] found that he needed to keep it slow to work with cheap microphones and speakers. Specifically, he had to keep the number of samples per symbol used by the GFSK modulation high, giving the receiver more time to spot the frequency shift for each symbol in the data stream. That’s probably because the speaker and microphone aren’t specifically designed for this sort of frequency. The system also requires a preamble before each data packet, which adds to the latency of the connection.
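The throughput limit follows directly from the numbers: with 2-FSK each symbol carries one bit, so the raw bit rate is just the sample rate divided by the samples per symbol. A back-of-the-envelope sketch (the figures are illustrative, not measurements from [Chris]'s system):

```python
sample_rate = 48000          # audio samples per second
samples_per_symbol = 400     # kept high so cheap mics can spot the frequency shift
bits_per_symbol = 1          # 2-FSK: one of two tones per symbol

bit_rate = sample_rate / samples_per_symbol * bits_per_symbol
print(bit_rate)  # 120.0 bits/s raw, before preamble overhead on each packet
```

Halving the samples per symbol doubles the rate, but gives the receiver half as much time to identify each tone, which is exactly the trade-off described above.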

So ultrasonic communications may not be fast, but they are harder to intercept than WiFi or other radio frequency signals. Especially if you aren’t looking for them, which inspired hacker [Kate Murphy] to create Quietnet, a simple Python chat system that uses the PyAudio library to send ultrasonic chat messages. For extra security, the system even allows you to change the carrier frequency, which could be useful if the feds are onto you. Whether overt, covert, or just for simple hardware configuration, ultrasonic communications is something to consider playing around with and adding to your bag of hardware tricks.


Filed under: digital audio hacks, Hackaday Columns, wireless hacks

by Richard Baguley at April 15, 2016 05:01 PM

April 14, 2016

GStreamer News

GStreamer 1.6.4 stable release

The GStreamer team is pleased to announce the second bugfix release in the old stable 1.6 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.6.x. For a full list of bugfixes see Bugzilla.

See /releases/1.6/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-editing-services, or gst-python, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-editing-services, or gst-python.

April 14, 2016 06:00 PM

Linux – cdm createdigitalmusic

A totally free DAW and live environment, built in SuperCollider: LNX_Studio

Imagine you had a DAW with lots of live tools and synths and effects – a bit like FL Studio or Ableton Live – and it was completely free. (Free as in beer, free as in freedom.) That’s already fairly cool. Now imagine that everything in that environment – every synth, every effect, every pattern maker – was built in SuperCollider, the powerful free coding language for electronic music. And imagine you could add your own stuff, just by coding, and it ran natively. That moves from fairly cool to insanely cool. And it’s what you get with LNX_Studio, a free environment that runs on any OS (Mac now, other builds coming), and that got a major upgrade recently. Let’s have a look.

LNX_Studio is a full-blown synth studio. You can do end-to-end production of entire tracks in it, if you choose. Included:

  • Virtual analog synths, effects, drum machines
  • Step sequencers, piano roll (with MIDI import), outboard gear control
  • Mix engine and architecture
  • Record audio output
  • Automation, presets, and programs (which, with quick recall, make this a nice idea starter or live setup)
  • Chord library, full MIDI output and external equipment integration

It’s best compared to the main view of FL Studio, or the basic rack in Reason, or the devices in Ableton Live, in that the focus is building up songs through patterns and instruments and effects. What you don’t get is audio input, multitracking, or that sort of linear arrangement. Then again, for a lot of electronic music, that’s still appealing – and you could always combine this with something like Ardour (to stay in free software) when it’s time to record tracks.

Also good in this age of external gear lust, all those pattern generators and MIDI control layouts play nice with outboard gear. There’s even an “external device” which you can map to outboard controls.

But all of this you can do in other software. And it’d be wrong to describe LNX_Studio as a free, poor man’s version of that gear, because it can do two things those tools can’t.

First, it’s entirely networked. You can hop onto a local network or the Internet and collaborate with other users. (Theoretically, anyway – I haven’t gotten to try this out yet, but the configuration looks dead simple.)

Second, and this I did play with, you can write your own synths and effects in SuperCollider and run them right in the environment. And unlike environments like Max for Live, that integration is fully native to the tool. You just hop right in, add some code, and go. To existing SuperCollider users, this is finally an integrated environment for running all your creations. To those who aren’t, this might get you hooked.

Here’s a closer look in pictures:

When you first get started, you’re presented with a structured environment to add instruments, effects, pattern generators, and so on.

Fully loaded, the environment resembles portions of FL Studio or Ableton Live. You get a conventional mixer display, and easy access to your tools.

Oh, yeah, and out of the box, you get some powerful, nice-sounding virtual analog synths.

But here’s the powerful part – inside every synth is SuperCollider code you can easily modify. And you can add your own code using this powerful, object-oriented, free and open source code environment for musicians.

Effects can use SuperCollider code, too. There’s also a widget library, so adding a graphical user interface is easy.

But whether you’re ready to code or not doesn’t matter much – there’s a lot to play with either way. Sequencers…

Drum machines…

More instruments…

You also get chord generators and (here) a piano roll editor.

When you’re ready to play with others, there’s also network capability for jamming in the same room or over a network (or the Internet).

Version 2.0 is just out, and adds loads of functionality and polish. Most importantly, you can add your own sound samples, and work with everything inside a mixer environment with automation. Overview of the new features (in case you saw the older version):

Main Studio
Channel style Mixer
Programs (group & sequence Instrument presets)
Automation
Auto fade in/out
Levels display
Synchronise channels independently
Sample support in GS Rhythm & SCCode instruments
WebBrowser for importing samples directly from the internet
Local sample support
Sample Cache for off-line use
Bum Note
Now polyphonic
Added Triangle wave & Noise
High Pass filter
2 Sync-able LFOs
PWM
Melody Maker module (chord progressions, melodies + hocket)
Import MIDI files
Audio In
Support for External instruments & effects
Interfaces for Moog Sub37, Roland JP-08, Korg Volca series
Many new instruments & effects added to SCCode & SCCodeF

I love what’s happening with Eurorack and hardware modular – and there’s nothing like physical knobs and cables. But that said, anyone who brags that modular environments are a “clean slate” and open environment would do well to look at this, too. The ability to code weird new instruments and effects is, to me, also a way to find originality. And since not everyone can budget for buying hardware, you can run this right now, on any computer you already own, for free. I think that’s wonderful, because it means all you need is your brain and some creativity. And that’s a great thing.

Give the software a try:

http://lnxstudio.sourceforge.net

And congrats to Neil Cosgrove for his work on this – let’s send some love and support his way.

The post A totally free DAW and live environment, built in SuperCollider: LNX_Studio appeared first on cdm createdigitalmusic.

by Peter Kirn at April 14, 2016 05:05 PM

blog4

Tina Mariane Krogh Madsen: Body Interfaces: A Processual Scripting

TMS member Tina Mariane Krogh Madsen is going to show a week-long durational performative installation with guests, in Berlin at Galerie Grüntaler 9 (at Grüntaler Strasse 9, as the name suggests) from April 15 to 22:

Body Interfaces: A Processual Scripting is a performative installation generated by Tina Mariane Krogh Madsen over the duration of one week. It wishes to raise questions regarding the role of documentation in artistic research, its status and how it can feed into other processes.
In the spatial frames of Grüntaler9, the artist will be intensively working with and redeveloping her own concept of an archive and resources based on the documents and remains from previous performances and interventions, which will additionally result in other performance structures.
The installation is an ongoing process that can be witnessed every day from 2-8pm. On selected days there will be guests invited to discuss and perform with the artist in the space.
::::::::: Tina Mariane Krogh Madsen’s research works with the body and (as) materiality via combining understandings of it that are derived from site-specific performance art and from working with technology.
A crucial part of this research takes the form of interventions and performances, collectively titled Body Interfaces, first generated during a residency in Iceland (May, 2015) and since then developed and performed in various contexts, constantly challenging their own format and method. These practices deal with the body as interface for experience and communication in relation to other materialities as well as the environment that surrounds and interacts with these. The interface is here read as a transmitting entity and agency between the body and the surrounding surfaces. An important part of Body Interfaces is its own documentation, in various formats, shapes and scripted entities.
The processual installation is open daily from 14:00 until 20:00 and can be witnessed at all times. The processual scripting has a dynamic approach to the space, and therefore the installation will arrive and evolve throughout the days; nothing has been installed in advance – all is part of the process.
The research topic will be shared through performances and interventions as well as an ongoing reel of performance documentation.
Friday April 15: inauguration and installation:
- 14:00h - 19:00h: performative installation (working session)
- 20:00h - 20:30: Body Interfaces Performance
- from 21:00: Fridäy Süpperclüb (food and drinks by donation)
Saturday April 16: sound (research collaborator: Malte Steiner):
- 14:00h - 19:00h: performative installation (working session)
- 19:00h: sound performance
Sunday April 17: body and site (research collaborator: Nathalie Fari):
- 14:00 - 17:00: performative installation (working session)
- 17:00 - 20:00: performance interventions with Nathalie Fari
Monday April 18: archiving as practice / restructuring and re-contextualizing materials (research collaborator: Joel Verwimp):
14:00h - 20:00h: performative installation with performance interventions (working session)
Tuesday April 19: chance as method – invigorating performative structures:
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Wednesday April 20: instruction: re-performance / transformation I (research collaborator: Aleks Slota):
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Thursday April 21: ritual(s)
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Friday April 22: instruction: re-performance / transformation II (research collaborator: Ilya Noé):
- 14:00h - 18:00h: performative installation with performance interventions (working session)
- 20:00h: Body Interfaces Processual Scripting Resume
- from 21:00: Fridäy Süpperclüb (food and drinks by donation)

::::::::: Performative schedule for April 15 to 22:

by herrsteiner (noreply@blogger.com) at April 14, 2016 03:32 PM

April 11, 2016

OpenAV

Fabla2 @ miniLAC video!

In an amazingly short time, the streaming videos of miniLAC are online!! OpenAV’s Fabla2 video is linked here; for other streaming links, check out https://media.ccc.de/v/minilac16-openav. Huge thanks to the Stream-Team for their amazing work! Read more →

by harry at April 11, 2016 09:11 AM

April 06, 2016

Libre Music Production - Articles, Tutorials and News

The Qstuff* Spring'16 Release Frenzy

In the wake of miniLAC2016@c-base.org Berlin, and keeping up with tradition, the most venerable of the Qstuff* are under the so-called Spring'16 release frenzy.

Enjoy the party!

by yassinphilip at April 06, 2016 05:01 PM

April 05, 2016

OpenAV

miniLAC 2016!

Hey, it’s miniLAC this weekend! Are you near Berlin? You should attend: the latest and greatest Linux Audio demos and software, and a chance to meet the community! Check out the schedule here. OpenAV is running a workshop on Fabla2, showcasing the advanced features that make it suitable for live performance and studio-grade drums, plus lots of fun with the new hardware integration for the Maschine… Read more →

by harry at April 05, 2016 07:35 PM

rncbc.org

Qtractor 0.7.6 - A Hazier Photon is released!


Hey, Spring'16 release frenzy isn't over as of just yet ;)

Keeping up with the tradition,

Qtractor 0.7.6 (a hazier photon) is released!

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Flattr this

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor

Change-log:

  • Plug-ins search path and out-of-process (aka. dummy) VST plug-in inventory scanning has been heavily refactored.
  • Fixed and optimized all dummy processing for plugins with more audio inputs and/or outputs than channels on a track or bus where it's inserted.
  • Fixed relative/absolute path mapping when saving/loading custom LV2 Plug-in State Presets.
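The relative/absolute mapping mentioned in the last bullet is, in generic terms, the usual trick of storing file references relative to the preset's own directory and resolving them back to absolute paths on load. A sketch of the technique in Python (not Qtractor's actual C++ code; the paths are hypothetical):

```python
import os.path

def save_path(abs_path, preset_dir):
    # Store paths relative to the preset's directory when possible,
    # so the preset survives being moved or copied elsewhere.
    try:
        return os.path.relpath(abs_path, preset_dir)
    except ValueError:          # e.g. paths on different drives on Windows
        return abs_path

def load_path(stored, preset_dir):
    # Resolve a stored path back to absolute, relative to the preset dir.
    if os.path.isabs(stored):
        return stored
    return os.path.normpath(os.path.join(preset_dir, stored))

p = save_path("/home/user/samples/kick.wav", "/home/user/presets")
print(p)                                    # ../samples/kick.wav (on POSIX)
print(load_path(p, "/home/user/presets"))   # /home/user/samples/kick.wav
```

The bug class being fixed is typically a mismatch between these two directions, e.g. saving relative paths but resolving them against the wrong base directory.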

Wiki (on going, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

 

Enjoy && Keep the fun, always.

by rncbc at April 05, 2016 06:30 PM

The Qstuff* Spring'16 Release Frenzy

In the wake of miniLAC2016@c-base.org Berlin, and keeping up with tradition, the most venerable of the Qstuff* are under the so-called Spring'16 release frenzy.

Enjoy the party!

Details are as follows...

 

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.2 (spring'16) released!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:
http://qjackctl.sourceforge.net
Downloads:
http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl

Change-log:

  • Added a brand new "Enable JACK D-BUS interface" option, split from the old common "Enable D-BUS interface" setup option, which now refers exclusively to QjackCtl's own D-BUS interface.
  • Dropped old "Start minimized to system tray" option from setup.
  • Added a double-click action (toggle start/stop) to the system tray icon (a pull request by Joel Moberg, thanks).
  • Added application keywords to freedesktop.org's AppData.
  • System-tray icon context menu has been fixed/hacked to show up again on Plasma 5 (aka. KDE5) notification status area.
  • Switched column entries in the unified interface device combo-box to make it work for macosx/coreaudio again.
  • Blind fix to a FTBFS on macosx/coreaudio platforms, a leftover from the unified interface device selection combo-box inception, almost two years ago.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Late French (fr) translation update (by Olivier Humbert, thanks).

License:

QjackCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.1 (spring'16) released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsynth.sourceforge.net
Downloads:
http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth

Change-log:

  • Dropped old "Start minimized to system tray" option from setup.
  • CMake script lists update (patch by Orcan Ogetbil, thanks).
  • Added application keywords to freedesktop.org's AppData.
  • System-tray icon context menu has been fixed/hacked to show up again on Plasma 5 (aka. KDE5) notifications status area.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Messages standard output capture has been improved in both ways a non-blocking pipe may get.
  • Regression fix for invalid system-tray icon dimensions reported by some desktop environment frameworks.

License:

Qsynth is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.0 (spring'16) released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsampler.sourceforge.net
Downloads:
http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code
https://github.com/rncbc/qsampler

Change-log:

  • Added application keywords to freedesktop.org's AppData.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Messages standard output capture has been improved again, now in both ways a non-blocking pipe may get.
  • Single/unique application instance control adapted to Qt5/X11.

License:

Qsampler is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

QXGEdit - A Qt XG Editor

QXGEdit 0.4.0 (spring'16) released!

QXGEdit is a live XG instrument editor, specialized in editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG, and thus probably a baseline for many other XG devices.

Website:
http://qxgedit.sourceforge.net
Downloads:
http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit

Change-log:

  • Prevent x11extras module from use on non-X11/Unix platforms.
  • French (fr) translations update (by Olivier Humbert, thanks).
  • Fixed port on MIDI 14-bit controllers input caching.

License:

QXGEdit is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.0 (spring'16) released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. It is inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and has been promoted to the Maemo package repositories. Nevertheless, QmidiCtl is still effective as a regular desktop application as well.
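The ipMIDI/multimidicast convention that QmidiCtl follows is very thin: raw MIDI bytes are sent as plain UDP datagrams to a fixed multicast group (225.0.0.37, with ports counting up from 21928, one per MIDI port — treat these exact values as an assumption here rather than gospel). A stdlib-only sketch of the sending side:

```python
import socket
import struct

IPMIDI_GROUP = "225.0.0.37"   # multicast group used by multimidicast/ipMIDI
IPMIDI_PORT = 21928           # first MIDI port; subsequent ports count upward

def note_on(channel, note, velocity):
    # A MIDI note-on is three raw bytes: status, note number, velocity.
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
# Whether the sending host also hears its own datagrams is governed by
# IP_MULTICAST_LOOP, whose semantics famously differ on Windows.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)

msg = note_on(0, 60, 100)      # middle C on channel 1
print(msg.hex())               # 903c64
# sock.sendto(msg, (IPMIDI_GROUP, IPMIDI_PORT))  # uncomment to transmit
```

Because there is no framing beyond the datagram itself, any ipMIDI-speaking receiver on the same group and port sees the bytes as if they arrived on a local MIDI cable.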

Website:
http://qmidictl.sourceforge.net
Downloads:
http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl

Change-log:

  • Added application keywords to freedesktop.org's AppData.

License:

QmidiCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.0 (spring'16) released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Website:
http://qmidinet.sourceforge.net
Downloads:
http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet

Change-log:

  • Allegedly fixed the setsockopt(IP_MULTICAST_LOOP) reverse semantics on Windows platforms (as suggested by Paul Davis, from Ardour's ipMIDI implementation, thanks).
  • Added application keywords to freedesktop.org's AppData.

License:

QmidiNet is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Enjoy && keep the fun, always!

by rncbc at April 05, 2016 05:30 PM

April 03, 2016

Pid Eins

Announcing systemd.conf 2016

We are happy to announce the 2016 installment of systemd.conf, the conference of the systemd project!

After our successful first conference in 2015, we’d like to repeat the event for a second time in 2016. The conference will take place from September 28th until October 1st, 2016, at betahaus in Berlin, Germany. The event is a few days before LinuxCon Europe, which is also located in Berlin this year. This year, the conference will consist of two days of presentations, a one-day hackfest and one day of hands-on training sessions.

The website is online now, please visit https://conf.systemd.io/.

Tickets at early-bird prices are available already. Purchase them at https://ti.to/systemdconf/systemdconf-2016.

The Call for Presentations will open soon, we are looking forward to your submissions! A separate announcement will be published as soon as the CfP is open.

systemd.conf 2016 is organized jointly by the systemd community and kinvolk.io.

We are looking for sponsors! We’ve got early commitments from some of last year’s sponsors: Collabora, Pengutronix & Red Hat. Please see the web site for details about how your company may become a sponsor, too.

If you have any questions, please contact us at info@systemd.io.

by Lennart Poettering at April 03, 2016 10:00 PM

Midichlorians in the blood

Taking Back From Android



Android is an operating system developed by Google around the Linux kernel. It is not like any other Linux distribution: not only have many common subsystems been replaced by other components, but the user interface is also radically different, based on the Java language running in a virtual machine called Dalvik.

An example of a subsystem removed from the Linux kernel is the ALSA Sequencer, which is a key piece for MIDI input/output with routing and scheduling, and which makes Linux comparable in capabilities to Mac OSX for musical applications (for musicians, not whistlers) and years ahead of Microsoft Windows in terms of infrastructure. Android did not offer anything comparable until Android 6 (Marshmallow).

Another subsystem from userspace Linux not included in Android is PulseAudio. Instead, Android provides OpenSL ES for digital audio output and input.

But Android also has some shining components. One of them is Sonivox EAS (originally created by Sonic Network, Inc.), released under the Apache 2 license, and the MIDI synthesizer used by my VMPK for Android application to produce noise. Funnily enough, it provided some legal fuel to Oracle in its battle against Google, because some Java binding sources were included in the AOSP repositories. It is not particularly outstanding in terms of audio quality, but it has the ability to provide real-time wavetable GM synthesis without using external soundfont files, and it consumes very little resources, so it may be a good fit for Linux projects on small embedded devices. Let's take it to Linux, then!

So the plan is: for the next Drumstick release, there will be a Drumstick-RT backend using Sonivox EAS. The audio output part is yet undecided, but for Linux it will probably be PulseAudio. In the same spirit, for Mac OSX there will be a backend leveraging Apple's internal DLS synth. These backends will be available in addition to the current FluidSynth one, which provides very good quality but uses expensive floating-point DSP calculations and requires external soundfont files.

Meanwhile, I've published on GitHub this repository, including a port of Sonivox EAS for Linux with ALSA Sequencer MIDI input and PulseAudio output. It also depends on Qt5 and Drumstick. Enjoy!

Sonivox EAS for Linux and Qt:
https://github.com/pedrolcl/Linux-SonivoxEas

Related Android project:
https://github.com/pedrolcl/android/tree/master/NativeGMSynth

by Pedro Lopez-Cabanillas (noreply@blogger.com) at April 03, 2016 04:59 PM

March 31, 2016

digital audio hacks – Hackaday

The ATtiny MIDI Plug Synth

MIDI was created over thirty years ago to connect electronic instruments, synths, sequencers, and computers together. Of course, this means MIDI was meant to be used with computers that are now thirty years old, and now even the tiniest microcontrollers have enough processing power to take a MIDI signal and create digital audio. [mitxela]’s polyphonic synth for the ATtiny2313 does just that, using only two kilobytes of Flash and fitting inside a MIDI jack.

Putting a MIDI synth into a MIDI plug is something we’ve seen a few times before. In fact, [mitxela] did the same thing a few months ago with an ATtiny85, and [Jan Ostman]’s DSP-G1 does the same thing with a tiny ARM chip. Building one of these with an ATtiny2313 is really pushing the envelope, though. With only 2 kB of Flash memory and 128 bytes of RAM, there’s not a lot of space in this chip. Making a polyphonic synth plug is even harder.

The circuit for [mitxela]’s chip is extremely simple, with power and MIDI data provided by a MIDI keyboard, a 20 MHz crystal, and audio output provided by eight digital pins summed with a bunch of resistors. Yes, this is only a square wave synth, and the polyphony is limited to eight channels. It works, as the video below spells out.
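Summing eight pins through equal resistors amounts to a crude nine-level DAC: the output at any instant is simply the count of pins that are high. A pure-Python sketch of that mixing model (illustrative only — the real firmware does this per-sample in AVR code, and the note numbers and sample rate here are assumptions):

```python
def midi_to_freq(note):
    # Standard MIDI tuning: A4 (note number 69) = 440 Hz.
    return 440.0 * 2 ** ((note - 69) / 12)

def render(notes, rate=20000, n_samples=100):
    """Mix up to 8 square-wave voices the way resistor-summed pins would."""
    freqs = [midi_to_freq(n) for n in notes]
    phases = [0.0] * len(notes)
    out = []
    for _ in range(n_samples):
        level = 0
        for i, f in enumerate(freqs):
            if phases[i] < 0.5:          # high half of the square wave
                level += 1               # this "pin" adds one unit
            phases[i] = (phases[i] + f / rate) % 1.0
        out.append(level)
    return out

samples = render([60, 64, 67])           # C major triad: 3 of the 8 voices
print(samples[0], max(samples))          # 3 3 -- level never exceeds voice count
```

With all eight voices sounding, the output spans nine discrete levels (0-8), which is where the "ugly quantization" mentioned below comes from.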

Is it a good synth? No, not really. By [mitxela]’s own assertion, it’s not a practical solution to anything; the dead bug construction takes an hour to put together, and the synth itself is limited to square waves with some ugly quantization at that. But it is a neat exercise in developing unique audio devices, and an especially hackey one, which makes it a very cool build. And it doesn’t sound half bad.


Filed under: ATtiny Hacks, digital audio hacks, musical hacks

by Brian Benchoff at March 31, 2016 05:00 AM

blog4

March 29, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.0 stable release (binaries)

Pre-built binary images of the 1.8.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

March 29, 2016 10:00 AM

March 27, 2016

Libre Music Production - Articles, Tutorials and News

Petigor's Tale used Audacity for sound recording

When the authors of Petigor's Tale, a game developed using Blend4Web, wanted to record and edit sound effects for their upcoming game, their choice fell on Audacity.

Read their detailed blog entry about how the editing and recording was made.

by admin at March 27, 2016 08:38 PM

March 26, 2016

Libre Music Production - Articles, Tutorials and News

DrumGizmo version 0.9.9

DrumGizmo version 0.9.9 is just out!

Highlighted changes / fixes:
 - Switch to LGPLv3
 - Linux VST
 - Embedded UI
 - Prepped for diskstreaming (but not yet implemented in UI)
 - Loads of bug fixes

Read the ChangeLog file for the full list of changes

Project Page
http://www.drumgizmo.org

by yassinphilip at March 26, 2016 06:20 PM

March 24, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.0 stable release

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

This release has been in the works for half a year and is packed with new features, bug fixes and other improvements.

See /releases/1.8/ for the full list of changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly after the source release by the GStreamer project during the stable 1.8 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

March 24, 2016 10:00 AM

Libre Music Production - Articles, Tutorials and News

AV Linux 2016: The Release


With this release, Glen is moving away from the 'everything but the kitchen sink' approach and instead is focusing on providing a very stable base suitable for low latency audio production.

by yassinphilip at March 24, 2016 05:43 AM

March 23, 2016

Libre Music Production - Articles, Tutorials and News

Ardour 4.7 released


Ardour 4.7 is now available, including a variety of improvements and minor bug fixes. The two most significant changes are:

by yassinphilip at March 23, 2016 11:02 AM

Linux Audio Users & Musicians Video Blog

Come Around – Evergreen

This is a music video of a song recorded/mixed/mastered using Linux
(AV Linux 2016) with Harrison Mixbus 3.1 along with some Calf and linuxDSP
Plugins. This is also the first production from our new ‘Bandshed’ studio
and will be released as part of a full EP in a month or so. The band
‘Evergreen’ is the band my son drums in and ‘Come Around’ is an original
song written by the singer.



by DJ Kotau at March 23, 2016 07:04 AM

March 22, 2016

Libre Music Production - Articles, Tutorials and News

Qtractor 0.7.5 (hazy photon) is out!

Qtractor, the audio/MIDI multi-track sequencer, has reached the 0.7.5 milestone!!


Highlights for this dot/beta release:

by yassinphilip at March 22, 2016 03:03 AM

March 21, 2016

Libre Music Production - Articles, Tutorials and News

Building SuperCollider 3.7.0 from Source (Debian)


A few months ago we published an introduction to the audio programming language SuperCollider here on LMP. With the recent announcement that SuperCollider has reached 3.7.0, we Debian Linux users suddenly find ourselves behind the times regarding our SuperCollider packages, which are likely to stay at 3.6.6 for some time. If you want 3.7.0 now (or any bleeding-edge version in the future), you have no choice but to build it from source.

by Scott Petersen at March 21, 2016 08:53 PM

rncbc.org

Qtractor 0.7.5 - The Hazy Photon is out!

Hello everybody!

Qtractor 0.7.5 (hazy photon) is out!

It comes with one top recommendation though: please update, at once, while it's hot! :)

Highlights for this dot/beta release:

  • Overlapping clips cross-fade (NEW)
  • MIDI Send/Return and Aux-Send insert plugins (NEW)
  • Generic and custom track icons eye-candy (NEW)

Some other interesting points may be found in the blunt and misty change-log below.

And just in case you missed it before,

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Change-log:

  • Beat unit divisor, aka. the denominator or lower numeral in the time-signature, now has a visible and practical effect over the time-line, even though the standard MIDI tempo (BPM) is always denoted in beats as quarter-notes (1/4, crotchet, semiminima) per minute.
  • Fixed an old hack on LV2 State Files abstract/relative file-path mapping when saving custom LV2 Presets (after a related issue on Fabla2, by Harry Van Haaren, thanks).
  • Default PC-Keyboard shortcuts may now be erasable and re-assigned (cf. Help/Shortcuts...).
  • New option on the audio/MIDI export dialog, on whether to add/import the exported result as brand new track(s).
  • Introducing a brand new track icons property.
  • Old Dry/Wet Insert and Aux-Send pseudo-plugin parameters are now split into separate Dry and Wet controls – what else could it possibly be? :)
  • Brand new MIDI Insert and Aux-Send pseudo-plugins are now implemented with very similar semantics as the respective and existing audio counterparts.
  • Implement LV2_STATE__loadDefaultState feature (after pull request by Hanspeter Portner aka. ventosus, thanks).
  • Plug-ins search paths internal logic has been refactored; an alternative file-name based search is now in effect for LADSPA, DSSI and VST plug-ins, whenever not found on their original file-path locations saved in a previous session.
  • Finally added the brand new menu Clip/Cross Fade command, aimed at setting fade-in/out ranges properly, going as far as to (auto)cross-fade consecutive overlapping clips.

Enjoy && Keep the fun, always.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor

Wiki (ongoing, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

by rncbc at March 21, 2016 08:00 PM

Libre Music Production - Articles, Tutorials and News

SuperCollider 3.7.0 Released

SuperCollider 3.7.0, over two years in the making, has finally been released!  Additions and fixes include (from the News in 3.7 help file):

by Scott Petersen at March 21, 2016 07:33 PM

March 20, 2016

OpenAV

New Web Host


Hey! Have you noticed OpenAV looks a little fresh again? Yep, we’ve moved to a sparkly new server. Why? Mostly due to a bug in the older server (backstory) which was extremely hard to fix. We’ve migrated to another, and things should be all rosy from now on. Any downtime, please report directly to our webmaster – harryhaaren@gmail.com. Now onwards…

by harry at March 20, 2016 09:56 PM

March 19, 2016

digital audio hacks – Hackaday

Tombstone Brings New Life to Board

Making revisions to existing PCBs with surface mount components often leads to creative solutions, and this insertion of a switch over a tombstoned resistor is no exception. According to [kubatyszko], “this is an FPGA-based Amiga clone. R15 serves as joint-stereo mixing signal between channels to make it easier on headphone users (Amiga has 4 channels, 2 left and 2 right). Removing R15 makes the stereo 100% ‘original’ with fully independent channels. Didn’t want to make it permanent so I decided to put a switch.”

Whether [kubatyszko] intends it or not, this solution is not going to be permanent without some additional work to mechanically secure the switch. We’ve tried this sort of thing before and it sometimes results in the contact area of the resistor being ripped off the substrate and separated from the rest of the resistor, rendering it useless. However, the creative use of the pads to get some additional functionality out of the board deserves some kudos.

We love creative fixes for board problems but it’s been a really long time since we’ve seen several of them collected in one place. We’d love to hear your favorite tricks so let us know in the comments below.


Filed under: digital audio hacks, misc hacks

by Bob Baddeley at March 19, 2016 11:01 AM

March 18, 2016

Libre Music Production - Articles, Tutorials and News

New Guitarix preset from Sebastian Posch

Sebastian Posch, Guitarix user extraordinaire, has released some new videos.

First out is his "Dope: Stoner/Doom Metal Preset for Guitarix":

And while you are at it, why not check out his guitar lesson about string skipping:

by admin at March 18, 2016 07:14 PM

March 16, 2016

Talk Unafraid

The Investigatory Powers Bill for architects and administrators

OK, it’s not the end of the world. But it does change things radically, should it pass third reading in its current form. There is, right now, an opportunity to effect some change to the bill in committee stage, and I urge you to read it and the excellent briefings from Liberty and the Open Rights Group and others and to write to your MP.

Anyway. What does this change in our threat models and security assessments? What aspects of security validation and testing do we need to take more seriously? I’m writing this from a small-ISP systems perspective, and it contains my personal views, not those of my employer, yada yada.

The threats

First up let’s look at what the government can actually do under this bill. I’m going to try and abstract things a little from the text in the bill, but essentially they can:

  • Issue a technical capability notice, which can compel the organization to make technical changes in order to provide a capability or service to government
  • Compel an individual (not necessarily within your organization) to access data
  • Issue a retention notice, which can compel the organization to store data and make it available through some mechanism
  • Covertly undertake equipment interference (potentially with the covert, compelled assistance of someone in the organization, potentially in bulk)

Assuming we’re handling some users’ data, and want to protect their privacy and security as their bits transit the network we operate, what do we now need to consider?

  • We can’t trust any individual internally
  • We can’t trust any small group of individuals fully
  • We can’t trust the entire organization not to become compromised
  • We must assume that we are subject to attempts at equipment interference
  • We must assume that we may be required to retain more data than we need to

So we’re going to end up with a bigger threat surface and more attractors for external threats (all that lovely data). We’ve got to assume individuals may be legally compelled to act against the best interests of the company’s users – this is something any organization has to consider a little, but we’ve always viewed it from the perspective of angry employees the day they quit, and so on. We can’t even trust that small groups are not compromised and may either transfer sensitive data or assist in the compromise of equipment.

Beyond that, we have to consider what happens if an organizational notice is made – what if we’re compelled to retain non-sampled flow data, or perform deep packet inspection and retain things like HTTP headers? How should we defend against all of this, from the perspective of keeping our users safe?

Motivation

To be clear – I am all for targeted surveillance. I believe strongly we should have well funded, smart people in our intelligence services, and that they should operate in a legal fashion, with clear laws that are up to date and maintained. I accept that no society with functional security services will have perfect privacy.

I don’t think the IPB is the right solution, mind you, but this is all to say that there will always be some need for targeted surveillance and equipment interference. These should be conducted only when a warrant is issued (preferably by a judge and cabinet minister), and ISPs should indeed be legally able to assist in these cases, which requires some loss of security and privacy for those targeted users – and it should be only those users.

I am a paid-up member of the Open Rights Group, Liberty and the Electronic Frontier Foundation. I also regularly attend industry events in the tech sector and the ISP sector in particular. Nobody wants to stop our spies from spying where there’s a clear need for them to do so.

However, as with all engineering, it’s a matter of tradeoffs. Bulk equipment interference or bulk data retention is complete overkill and helps nobody. Covert attacks on UK infrastructure actively weaken our security. So how do we go about building a framework that permits targeted data retention and equipment interference in a secure manner? Indeed, one that encourages it at an organizational level rather than forcing it to occur covertly?

Equipment Interference

This is the big one, really. It doesn’t matter how it happens – an internally compelled employee, a cleaner plugging a USB stick from a vague but menacing government agency into a server and rebooting it, switches having their bootloaders flashed with new firmware as they’re shipped to you, or covert network intrusion. Either way you end up in a situation where your routers, switches, servers etc. are doing things you did not expect, almost certainly without your knowledge.

This makes it practically impossible to ensure they are secure, against any threats. Sure, your Catalyst claims to be running IOS 13.2.1. Your MX-series claims to be running JunOS 15.1. Can we verify this? Maybe. We can use host-based intrusion detection systems to monitor integrity and raise alarms.
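The integrity-monitoring idea can be sketched in a few lines – compare artefacts against known-good digests, as a host-based IDS would. This is a minimal illustration, not a real HIDS; the file names and images are hypothetical.

```python
# Minimal integrity-check sketch: flag artefacts whose SHA-256 digest
# no longer matches a known-good baseline. Names/contents are made up.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify(artefacts: dict, known_good: dict) -> list:
    """Return names of artefacts whose digest no longer matches baseline."""
    return [name for name, blob in artefacts.items()
            if sha256_of(blob) != known_good.get(name)]

# Baseline taken when the device was trusted.
baseline = {"bootloader.bin": sha256_of(b"trusted image v1")}

# A later sweep finds the image has silently changed.
tampered = verify({"bootloader.bin": b"trusted image v1 + implant"}, baseline)
```

The hard part, of course, is that a sufficiently deep compromise can lie to the tool doing the hashing – which is exactly the verification problem described above.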

Now, proper auditing and logging and monitoring of all these devices, coupled with change management etc will catch most of the mundane approaches – that’s just good infosec, and we have to do that to catch all the criminals, script kiddies and random bots trawling the net for vulnerable hosts. Where it gets interesting is how you protect against the sysadmin themselves.

It feels like we need to start implementing m-in-n authorization for tasks around sensitive hosts and services. Some stuff we should be able to lock down quite firmly. Reconfiguring firewalls outside of the managed, audited process for doing so using a configuration management (CM) tool? Clearly no need for this, so why should anyone ever be able to do it? All services in CM, be it Puppet/Salt/Chef, with strongly guarded git and puppet repositories and strong authentication everywhere (keys, a proper CA with signing for CM client/server auth, etc.)? Then why would admins ever need to log into machines? Except that inevitably someone does need to, and they’ll need root to diagnose whatever’s gone wrong, even if the fix eventually lands in CM.

We can implement 2-person or even 3-person authentication quite easily, even at small scales, using physical tools – hardware security modules locked in 2-key safes, or similar. But it’s cumbersome and complicated, and doesn’t work for small scales where availability is a concern – is the on-call team now 3 people, and are they all in the office all the time with their keys?
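As a toy sketch of the m-in-n idea – approval logic only, since a real deployment would rest on hardware tokens or secret sharing as described above; the team names are made up:

```python
# Toy m-in-n authorization sketch: a sensitive action proceeds only once
# m *distinct* members of the authorized set have approved it.

def authorized(approvals, authorized_set, m):
    """True iff at least m distinct members of authorized_set approved."""
    distinct = set(approvals) & set(authorized_set)
    return len(distinct) >= m

ops_team = {"alice", "bob", "carol"}

# Duplicate approvals from one person don't count twice,
# and approvals from outsiders don't count at all.
ok = authorized(["alice", "alice", "bob"], ops_team, 2)
```

The operational pain point noted above remains: the logic is trivial, but keeping m key-holders available on call is not.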

There’s a lot that could be done to improve that situation in low-to-medium security environments: stop the simple attacks, improve the baseline for operational security, and crucially surface any covert attempts at EI conducted by an individual or from outside. Organizationally, it’d be best for everyone if the organization were aware of modifications that were required to its equipment.

From a security perspective, a technical capability notice or data retention notice of some sort issued to the company or group of people at least means that a discussion can be had internally. The organization may well be able to assist in minimising collateral damage. Imagine: “GCHQ needs to look at detailed traffic for this list of 10 IPs in an investigation? Okay, stick those subscribers in a separate VLAN once they hit the edge switches, route that through the core here and perform the extra logging here for just that VLAN and they’ve got it! Nobody else gets logged!” rather than “hey, why is this Juniper box suddenly sending a few Mbps from its management interface to some IP in Gloucestershire? And does anyone know why the routing engines both restarted lately?”

Data Retention

This one’s actually pretty easy to think about. If it’s legally compelled by a retention or technical capability notice, you must retain as required, and store it as you would your own browser history – in a write-only secure enclave, with vetted staff, ISO 27001-compliant processes (plus whatever CESG requires), complete and utter segmentation from the rest of the business, and whatever “request filter” the government requires stays in there with dedicated, highly monitored and audited connectivity.

What’s that, you say? The government is not willing to pay for all that? The overhead of such a store for most small ISPs (<100,000 customers) would be huge. We’re talking millions if not more per small ISP (ispreview.co.uk lists 237 ISPs in the UK). Substantial office space, probably 5 non-technical and 5 technical staff at minimum, a completely separate network, data diodes from the collection systems, collection systems themselves, redundant storage hardware, development and test environments, backups (offsite, of course – to your second highly secure storage facility), processing hardware for the request filter, and so on. Just the collection hardware might be half a million pounds of equipment for a small ISP. If the government start requiring CESG IL3 or higher, the costs keep going up. The code of practice suggests bulk data might just be held at OFFICIAL – SENSITIVE, though, so IL2 might be enough.

The biggest risk to organizations when it comes to data retention is that the government might not cover your costs – they’re certainly not required to. And of course the fact that you’re the one to blame if you don’t secure it properly and it gets leaked. And the fact that every hacker with dreams of identity theft in the universe now wants to hack you so bad, because you’ve just become a wonderfully juicy repository of information. If this info got out, even for a small ISP, and we’re talking personally-identifiable flow information/IP logs – which is what “Internet Connection Records” look/sound like, though they’re still not defined – then Ashley Madison, TalkTalk and every other “big data breach” would look hilariously irrelevant by comparison. Imagine what personal data you could extract from those 10,000 users at that small ISP! Imagine how many people’s personal lives you could utterly destroy, by outing them as gay, trans or HIV positive, or a thousand other things. All it would take is one tiny leak.

You can’t do anything to improve the security/privacy of your end users – at this point, you’re legally not allowed to stop collecting the data. Secure it properly and did I mention you should write to your MP while the IPB is at committee stage?

If you’ve not been served with a notice: carry on, business as usual, retain as little as possible to cover operational needs and secure it well.

Auditing

Auditing isn’t a thing that happens enough.

I always think that auditing is a huge missed opportunity. We do pair programming and code review in the software world, so why not do terminal session reviews? If X logs into a router, makes 10 changes and logs out, yes we can audit the config changes and do other stateful analysis, but we could also review those commands as a single session. It feels like there’s a tool missing that collates logs from something like syslog, brings them together as a session, and then exposes that as something people can look over, review, and approve or flag for discussion.
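A rough sketch of what such a tool might do, assuming a simplified `user tty command` log format for illustration (real syslog parsing would be considerably messier):

```python
# Sketch of the missing tool: collate per-command audit log lines into
# reviewable sessions, keyed by (user, tty). Log format is a stand-in.
from collections import defaultdict

def collate(lines):
    """Group 'user tty command' lines into one command list per session."""
    sessions = defaultdict(list)
    for line in lines:
        user, tty, command = line.split(" ", 2)
        sessions[(user, tty)].append(command)
    return dict(sessions)

log = [
    "alice pts/0 configure terminal",
    "bob pts/1 show ip route",
    "alice pts/0 no logging host 10.0.0.5",
]
sessions = collate(log)
```

Each resulting session is then a single reviewable unit – exactly the “10 changes between login and logout” bundle described above – rather than interleaved lines scattered through a shared log.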

It’s a nice way for people to learn, too – I’ve discovered so many useful tools from watching my colleagues hack away at a server, and about the only way I can make people feel comfortable working with SELinux is to walk them through the quite friendly tools.

Auditing should in any case become a matter of course. Tools like graylog2, ELK and a smorgasbord of others allow you to set up alerts or streams on log lines – start surfacing things like root logins, su/sudo usage, and “high-risk” commands like firmware updates and logging configuration changes. Stick them on a dashboard display.
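The stream idea boils down to matching log lines against named patterns. A minimal sketch follows; the patterns are examples only and would need tuning to your actual log formats:

```python
# Sketch of alert "streams": tag log lines matching high-risk patterns
# (root logins, sudo use, firmware changes). Patterns are illustrative.
import re

RULES = {
    "root-login": re.compile(r"session opened for user root"),
    "sudo":       re.compile(r"\bsudo\b.*COMMAND="),
    "firmware":   re.compile(r"firmware (update|upgrade)", re.I),
}

def alerts(line):
    """Return the names of every rule the log line trips."""
    return [name for name, rx in RULES.items() if rx.search(line)]

hits = alerts("sshd[101]: pam_unix: session opened for user root by (uid=0)")
```

Anything that trips a rule gets surfaced for a human to look at; everything else stays in the archive.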

Auditing things that don’t produce nice auditable logs is of course more difficult – some firewalls don’t, some appliances don’t. Those just need to be replaced or wrapped in a layer that can be audited. Web interface with no login or command audit trail? Stick it behind an HTTPS proxy that does log, and pull out the POSTs. Firewall with no logging capability? Bin it and put in something that does. Come on, it’s 2016.

Technical capability notices and the rest

This is the unfixable. If you get handed a TCN, you basically get to do what it says. You can appeal on the grounds of technical infeasibility, but not proportionality or anything like that. So short of radically decentralizing your infrastructure to make it technically too expensive for the government, you’re kind of stuck with doing what they say.

The law is written well enough to prevent obvious loopholes. If you’re an ISP, you might consider encryption – you could encrypt data at your CPEs, and decrypt it on your edge. You could go a step further and not decrypt it at all, but pass it to some other company you notionally operate extraterritorially, who decrypt it and then send it on its way from there. But these come with potentially huge cost, and in any case the TCN can require you to remove any protection you applied or are in a position to remove if practical.

We can harden infrastructure a little – things like using m-in-n models, DNSCrypt for DNS lookups from CPEs, securely authenticating provisioning servers and so on. But there is no technical solution to a policy problem – absolutely any ISP, CSP, or one-man startup in the UK is as powerless as the next if the government rocks up with a TCN requiring you to store all your customers’ data, or to install black boxes everywhere your aggregation layer connects to the core, or whatever.

Effectively, then, the UK industry is powerless to prevent the government from doing whatever the hell it likes, regardless of security or privacy implications, to our networks, hardware and software. We can take some steps to mitigate covert threats or at least give us a better chance of finding them, and we can make some changes which attempt to mitigate against compelled (or hostile) actors internally – there’s an argument that says we should be doing this anyway.

And we can cooperate with properly-scoped targeted warrants. Law enforcement is full of good people, trying to do the right thing. But their views on what the right thing to do is must not dictate political direction and legal implementation while ignoring the technical realities. To do so is to doom the UK to many more years with a legal framework which does not reflect reality, and actively harms the security of millions of end users.

by James Harrison at March 16, 2016 09:49 PM

ardour

Reduced service, March 17th-29th

Starting on March 17th, anybody who requires assistance with subscriptions, website registration and so forth will need to wait until the 29th. I (Paul) will be travelling and likely without any internet access during that time. I will survey emails when I get back, but older forum posts will not get scanned, so if you have an issue related to those things, please send me mail (paul@linuxaudiosystems.com) rather than assume that a forum post will lead to action - it will not.

Friendly users and some developers will likely still answer other questions posted to the forums, so don't feel limited in that respect.

read more

by paul at March 16, 2016 03:32 PM

March 15, 2016

OSM podcast

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.0 release candidate 2 (1.7.91)

The GStreamer team is pleased to announce the second release candidate of the stable 1.8 release series. The 1.8 release series is adding new features on top of the 1.0, 1.2, 1.4 and 1.6 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the stable 1.8 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

March 15, 2016 01:00 PM

March 13, 2016

digital audio hacks – Hackaday

A Pi Powered Recording Studio

In the mid-90s, you recorded your band’s demo on a Tascam cassette tape deck. These surprisingly cheap four-track portable studios were just low tech enough to lend an air of authenticity to a band that calls itself, ‘something like Pearl Jam, but with a piano’. These tape decks disappeared a decade later, just like your dreams of being a rock star, replaced with portable digital recording studios.

The Raspberry Pi exists, the Linux audio stack is in much better shape than it was ten years ago, and now it’s possible to build your own standalone recording studio. That’s exactly what [Daniel] is doing for our Raspberry Pi Zero contest, and somewhat predictably he’s calling it the piStudio.

Although the technology has moved from cassette tapes to CompactFlash cards to hard drives, the design of these four-track mini recording studios hasn’t really changed since their introduction in the 1980s. There are four channels, each with a fader, balance, EQ, and line-in and XLR jacks. There are master controls, a few VU meters, and if the technology is digital, a pair of MIDI jacks. Since [Daniel] is using a Raspberry Pi for this project, he threw in an LCD for a great user interface.

As with all digital recorders, the money is in the analog to digital converters. [Daniel] is using a 24-bit, 216kHz, four-channel chip, Texas Instruments’ PCM4204. That’s more than enough to confuse the ears of an audiophile, although that much data will require a hard drive. Good thing there will be SATA.
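A quick back-of-envelope shows why a hard drive is needed: four 24-bit channels at the chip’s maximum 216 kHz sample rate amount to roughly 2.5 MB every second.

```python
# Back-of-envelope sustained data rate for four 24-bit channels sampled
# at 216 kHz (the PCM4204's maximum rate), before any container overhead.
channels = 4
bytes_per_sample = 24 // 8        # 24-bit samples = 3 bytes
sample_rate = 216_000             # Hz

bytes_per_second = channels * bytes_per_sample * sample_rate
mib_per_minute = bytes_per_second * 60 / 2**20   # roughly 148 MiB/minute
```

At that rate an hour of recording lands in the 8–9 GB range, comfortably beyond SD-card territory for sustained writes – hence the SATA.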

Although you can buy an eight-channel solid-state recorder for a few hundred dollars – and [Daniel] will assuredly put more than that into this project – it’s a great application of a ubiquitous Linux computer for a device that’s very, very useful.


The Raspberry Pi Zero contest is presented by Hackaday and Adafruit. Prizes include Raspberry Pi Zeros from Adafruit and gift cards to The Hackaday Store!
See All the Entries || Enter Your Project Now!


Filed under: digital audio hacks, Raspberry Pi

by Brian Benchoff at March 13, 2016 08:00 PM

March 11, 2016

ardour

Subscription/Payment Problems (Part 2)

There continue to be issues with our interactions with PayPal over the last several days. PayPal required some small changes to the way things work (good changes, that help with security), but they also changed some minor details that broke our payment processing system in subtle ways.

If you made a payment or tried to set up a subscription in the period March 8th - March 11th at about 15:30h UTC, and things did not work as you expected, please email me at paul@linuxaudiosystems.com and we'll make it right.

The problems are believed to be fixed now. Apologies for the errors and inconvenience.

read more

by paul at March 11, 2016 08:39 PM

Libre Music Production - Articles, Tutorials and News

MiniLAC Berlin 2016

MiniLAC is a more compact, community-driven version of the yearly Linux Audio Conference.

A place where music producers, developers and users of your favorite Linux (capable) audio software come together and think about the future shape of Linux Audio.

by yassinphilip at March 11, 2016 05:15 AM

DFasma 1.4.4 released

DFasma is a free, open-source software tool used to compare audio files in time and frequency. The comparison is first visual, using waveforms and spectra. It is also possible to listen to time-frequency segments in order to allow perceptual comparison. It is basically dedicated to analysis. Even though there are basic functionalities to align the signals in time and amplitude, this software does not aim to be an audio editor.

by yassinphilip at March 11, 2016 04:27 AM