planet.linuxaudio.org

July 26, 2016

OSM podcast

July 25, 2016

Libre Music Production - Articles, Tutorials and News

Autotuning & pitch correction with Zita-AT1 in Ardour

Autotuning & pitch correction with Zita-AT1 in Ardour

Be it for correcting those slightly out-of-tune notes from your singer, or going all the way to a full Cher effect, an auto-tune plugin might come in handy. There aren't many designed for Linux, though choices do exist:

by Conor at July 25, 2016 08:20 AM

July 24, 2016

News – Ubuntu Studio

Ubuntu Studio 16.04.1 Released

A new point release of the Xenial Xerus LTS has been released. As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs. Please see the 16.04.1 change summary for […]

by Set Hallstrom at July 24, 2016 09:55 AM

July 19, 2016

open-source – CDM Create Digital Music

Feel the beat on a Magic Trackpad or MacBook with free tool

Don’t like clicks or beeps or other sounds when using a metronome? Try some haptic feedback instead, with this free utility.

First, you’ll need an Apple trackpad that supports haptic feedback. Pretty soon, I suspect that will be all the new MacBooks – most of the line is badly in need of an update (another story there). For now, it’s the 2015 MacBook Pro and the so-called “New MacBook.”

Alternatively, you can use the Magic Trackpad 2. That’s perhaps the best option, because it’s wireless and you can position it anywhere you like – say, atop your keyboard or next to your Maschine.

Then, fire up this free utility, direct MIDI to the app, and you’ll feel as if someone is tapping you with the beat. No annoying sounds anywhere – perfect.

Since it listens to MIDI Clock, you can use any source, from Ableton Live (in turn synced to Ableton Link) to hardware (if it’s connected to your computer). It uses start/stop events to make sure it’s on the beat, then taps you on quarter notes.
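The clock-counting logic described above is simple: the MIDI spec defines 24 Timing Clock pulses per quarter note, and the Start/Stop real-time messages reset the phase. Here is a hedged sketch of that idea in Python (a hypothetical class for illustration, not the magiclock app's actual code):

```python
# Illustrative quarter-note tapper driven by MIDI real-time messages.
# MIDI Clock (0xF8) arrives 24 times per quarter note; Start (0xFA)
# re-aligns the count to the downbeat, Stop (0xFC) silences taps.

class QuarterNoteTapper:
    PULSES_PER_QUARTER = 24  # fixed by the MIDI specification

    def __init__(self, tap):
        self.tap = tap          # callback fired once per quarter note
        self.running = False
        self.pulses = 0

    def on_start(self):         # MIDI Start: align to the downbeat
        self.running = True
        self.pulses = 0

    def on_stop(self):          # MIDI Stop: stop tapping
        self.running = False

    def on_clock(self):         # MIDI Timing Clock pulse
        if not self.running:
            return
        if self.pulses % self.PULSES_PER_QUARTER == 0:
            self.tap()          # in the app, this fires the haptic pulse
        self.pulses += 1
```

Feeding it a Start followed by a steady clock stream produces one tap per beat, which is all the trackpad needs.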

The app is open source if anyone wants to check out the code. And you’ll find complete instructions. (Don’t download from the links at the top of the page; look at the beginning of the documentation for a ready-to-run app.)

https://github.com/faroit/magiclock

Genius.


Next, Apple Watch? (Also with “Taptic Engine™” support.) There are some entries out there, like this one, though they seem to be slightly hampered by the current restrictions on apps from Apple. (I like my Pebble, too!)

The haptic-feedback-specialized Basslet, coming after a Kickstarter campaign, might actually be the best bet, and I could see people who didn't buy into the music-listening application still buying it for this.

The post Feel the beat on a Magic Trackpad or MacBook with free tool appeared first on CDM Create Digital Music.

by Peter Kirn at July 19, 2016 01:55 PM

July 18, 2016

Pid Eins

REMINDER! systemd.conf 2016 CfP Ends in Two Weeks!

Please note that the systemd.conf 2016 Call for Participation ends in less than two weeks, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are interested in yours even more!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions that may cover any systemd-related topic you'd like. We are interested in submissions both from the developer community and from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

And keep in mind:

REMINDER: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

by Lennart Poettering at July 18, 2016 10:00 PM

July 16, 2016

digital audio hacks – Hackaday

Hacklet 116 – Audio Projects

If the first circuit a hacker builds is an LED blinker, the second one has to be a noisemaker of some sort. From simple buzzers to the fabled Atari punk console, and guitar effects to digitizing circuits, hackers, makers and engineers have been building incredible audio projects for decades. This week the Hacklet covers some of the best audio projects on Hackaday.io!

We start with [K.C. Lee] and Automatic audio source switching. Two audio sources, one amplifier and speaker system: this is the problem [K.C. Lee] is facing. He listens to audio from his computer and TV, but doesn’t need to have both connected at the same time. Currently he’s using a DPDT switch to change inputs. Rather than manually flip the switch, [K.C. Lee] created this project to automatically swap sources for him. He’s using an STM32F030F4 ARM processor as the brains of the operation. The ADCs on the microcontroller monitor both sources and pick the currently active one. With all that processing power, and a Nokia LCD as an output, it would be a crime not to add some cool features: the source switcher also displays a spectrum analyzer, a VU meter, the date, and the time. It will even attenuate loud sources, like webpages that start blasting audio.
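The level-based switching can be sketched roughly like this. The thresholds and the simple hysteresis scheme below are illustrative guesses, not [K.C. Lee]'s firmware:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples, as an ADC would see it."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pick_source(current, levels, on_threshold=0.05, off_threshold=0.01):
    """Return the index of the source to route to the amplifier.

    Sticks with `current` unless it has gone quiet AND another source is
    clearly active. Using two thresholds (hysteresis) avoids flapping
    between inputs on brief pauses. Threshold values are made up.
    """
    if levels[current] >= off_threshold:
        return current                      # current source still active
    for i, level in enumerate(levels):
        if i != current and level >= on_threshold:
            return i                        # switch to the loud source
    return current                          # everything quiet: stay put
```

In the real device this decision would run periodically on fresh RMS readings from both ADC channels.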

 

Next up is [Adam Vadala-Roth] with Audio Blox: Experiments in Analog Audio Design. [Adam] has 32 projects and counting on Hackaday.io. His interests cover everything from LEDs to 3D printing to solar to hydroponics. Audio Blox is the project he uses as his engineer’s notebook for analog audio projects. It is a great way to watch a hacker figure out what works and what doesn’t. His current project is a four-board modular version of the Big Muff Pi guitar pedal. He’s broken this classic guitar effect down into an input board, a clipping board, a tone control, and an output stage. His PCB layouts, schematics, and explanations are always a treat to view and read!

Next we have [Paul Stoffregen] with the Teensy Audio Library. For those not in the know, [Paul] is the creator of the Teensy family of boards, which started as an Arduino on steroids and has morphed into something even more powerful. This project documents the audio library [Paul] created for the Freescale/NXP ARM processor which powers the Teensy 3.1. Multiple audio files playing at once, delays, and effects are just a few of the things this library can do. If you’re new to the audio library, definitely check out [Paul]’s companion project, Microcontroller Audio Workshop & HaD Supercon 2015. This project is an online version of the workshop [Paul] ran at the 2015 Hackaday Supercon in San Francisco.

Finally we have [drewrisinger] with DrDAC USB Audio DAC. DrDAC is a high-quality DAC board which provides USB-powered audio output for any PC. Computers these days are built down to a price, which means that lower-quality audio components are often used. Couple this with the fact that a computer is an electrically noisy place, and you get less-than-stellar audio. Good enough for the masses, but not quite up to par if you want to listen to studio-quality audio. DrDAC houses a PCM2706 audio DAC and quality support components in a 3D printed case. DrDAC was inspired by [cobaltmute]’s pupDAC.

If you want to see more audio projects and hacks, check out our new audio projects list. See a project I might have missed? Don’t be shy, just drop me a message on Hackaday.io. That’s it for this week’s Hacklet. As always, see you next week. Same hack time, same hack channel, bringing you the best of Hackaday.io!


Filed under: digital audio hacks, Hackaday Columns

by Adam Fabio at July 16, 2016 05:01 PM

July 15, 2016

digital audio hacks – Hackaday

Baby Monitor Rebuild is also ESP8266 Audio Streaming How-To

[Sven337]’s rebuild of a cheap and terrible baby monitor isn’t super visual, but it has so much more going on than it first seems. It’s also a how-to for streaming audio via UDP over WiFi with a pair of ESP8266 units, and includes a frank sharing of things that went wrong in the process and how they were addressed. [Sven337] even experimented with a couple of different methods for real-time compression of the transmitted audio data, for no other reason than the sake of doing things as well as they can reasonably be done without adding parts or spending extra money.
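As a rough illustration of the streaming scheme (audio carved into numbered UDP datagrams, so the receiver can notice drops and reordering), here is a hedged Python sketch. The real project runs equivalent C++ on the ESP8266, and the 4-byte sequence header below is an invented layout, not [Sven337]'s wire format:

```python
import socket
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number (assumed layout)

def send_chunks(sock, addr, pcm, chunk=512):
    """Send raw PCM bytes as numbered UDP datagrams.

    UDP gives no delivery or ordering guarantees, so each datagram
    carries a sequence number the receiver can use to detect loss.
    """
    for seq, off in enumerate(range(0, len(pcm), chunk)):
        sock.sendto(HEADER.pack(seq) + pcm[off:off + chunk], addr)

def parse_chunk(datagram):
    """Split a received datagram back into (sequence number, payload)."""
    return HEADER.unpack(datagram[:HEADER.size])[0], datagram[HEADER.size:]
```

A receiver would compare successive sequence numbers and, on a gap, either interpolate or insert silence to keep playback timing intact.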

The original baby monitor had audio and video but was utterly useless, for a number of reasons (write-up in French). The range and quality were terrible, and the audio was full of static and interference that was just as loud as anything the microphone actually picked up from the room. The user is left with two choices: either have white noise constantly coming through the receiver, or be unable to hear your child because you turned the volume down to get rid of the constant static. Our favorite part is the VOX “feature”: if the baby is quiet, it turns off the receiver’s screen; it has no effect whatsoever on the audio! As icing on the cake, the analog 2.4GHz transmitter interferes with the household WiFi when it transmits, which is all the time, because it’s always on.

Small wonder [Sven337] decided to go the DIY route. Instead of getting dumped in the trash, the unit got rebuilt almost from the ground-up.

Re-using the enclosures meant that the DIY rebuild looked as good as it worked. After all, [Sven337] didn’t want a duct-taped hack job in the nursery. But don’t let the ugly mess inside the enclosure fool you: there is a lot of detail work in this build. The inside may be a tangle of wires and breakout boards, but it’s often a challenge to work within the space constraints of fitting a project into some other device’s enclosure.

The ESP8266 works, but it is not a completely natural fit for an audio baby monitor, as it lacks a quality ADC and DAC. On the other hand, it is cheap, easy to use, and has plenty of processing power. These attributes are the reason the ESP8266 has made its way into so many projects, including household gadgets like this WiFi webcam.


Filed under: digital audio hacks, how-to

by Donald Papp at July 15, 2016 11:00 PM

July 14, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #20: An interview with Marius Stärk

LMP Asks #20: An interview with Marius Stärk

This month LMP Asks talks to Marius Stärk, Linux enthusiast and musician who produces all his music with FLOSS tools.

Hi Marius, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

My name is Marius Stärk, I'm 28 years old and I live in the city of Aachen, a medium-sized city at Germany's western border, adjacent to Belgium and the Netherlands.

by Conor at July 14, 2016 03:02 PM

July 12, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.9.1 unstable release (binaries)

Pre-built binary images of the 1.9.1 unstable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

July 12, 2016 12:00 AM

July 07, 2016

News – Ubuntu Studio

Backports, the benefits and the consequences.

Ubuntu Studio is happy to announce that backports are going to be rolling out soon and the first one will be Ardour. Backports are newer versions of applications, ported back to stable versions of the system. For example in the case of Ardour, Ubuntu Studio users running 14.04 or 16.04 will be able to have […]

by Set Hallstrom at July 07, 2016 10:11 AM

Libre Music Production - Articles, Tutorials and News

July 2016 Newsletter out now - Interviews, News and more

Our newsletter for July has now been sent to our subscribers. If you have not yet subscribed, you can do so from our start page.

You can also read the latest issue online. In it you will find:

  • 3 new 'LMP Asks' interviews
  • News
  • New software release announcements

and more!

by admin at July 07, 2016 12:27 AM

July 06, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.9.1 unstable release

The GStreamer team is pleased to announce the first release of the unstable 1.9 release series. The 1.9 series adds new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.9 series will lead to the stable 1.10 release series in the coming weeks. Any newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

July 06, 2016 12:00 PM

July 01, 2016

ardour

What's coming in Ardour 5.0

It has been longer than usual since the last release of Ardour, and I wanted to give people a sense of the amazing stuff that we're soon going to be releasing as part of Ardour 5.0. We don't have a release date yet, but things are rapidly shaping up and we hope to see 5.0 greet the world in the next few weeks. Read on below for some of the highlights ...

read more

by paul at July 01, 2016 11:45 AM

digital audio hacks – Hackaday

1024 “Pixel” Sound Camera Treats Eyes to Real-Time Audio

A few years ago, [Artem] learned about ways to focus sound in an issue of Popular Mechanics. If sound can be focused, he reasoned, it could be focused onto a plane of microphones. Get enough microphones, and you have a ‘sound camera’, with each microphone a single pixel.

Movies and TV shows about comic books are now the height of culture, so a device using an array of microphones to produce an image isn’t merely an interesting demonstration of FFT, signal processing, and high-speed electronic design. It’s a Daredevil camera, and it’s one of the greatest builds we’ve ever seen.

[Artem]’s build log isn’t a step-by-step process on how to make a sound camera. Instead, he went through the entire process of building this array of microphones, and as with all amazing builds, the first step never works. The first prototype was based on a flatbed-scanner camera: simply a flatbed scanner in a lightproof box with a pinhole. The idea was that by scanning a microphone back and forth, using the pinhole as a ‘lens’, [Artem] could detect where a sound was coming from. He pulled out his scanner and a signal generator, and ran the experiment. It didn’t work. The box was not soundproof, the inner chamber should have been anechoic, and even if it had worked, this camera would only be able to produce an image or two a minute.

8×8 microphone array (mics on opposite side) connected to the Altera FPGA at the center

The idea sat on the shelf of [Artem]’s mind for a while, and along the way he learned about the FFT and how the gigantic Duga over-the-horizon radar actually worked. Math was the answer: by using the FFT to transform a microphone’s signal from up-and-down samples into buckets of frequency and intensity, he could build this camera.

That was the theory, anyway. Practicality has a way of getting in the way, and to build this gigantic sound camera he would need dozens of microphones, dozens of amplifiers, and a controller with enough analog pins, ADCs, and processing power to make sense of all of it.

This complexity collapsed when [Artem] realized there was an off-the-shelf part that was a perfect microphone camera pixel. MEMS microphones, like the kind found in smartphones, take analog sound and turn it into a digital signal. Feed this into a fast enough microcontroller, and you can perform FFT on the signal and repeat the same process on the next pixel. This was the answer, and the only thing left to do was to build a board with an array of microphones.
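The per-pixel idea (FFT one microphone's block of samples, then keep the magnitude near a frequency of interest as that pixel's brightness) can be sketched as follows. This is an illustrative NumPy version, not [Artem]'s FPGA implementation:

```python
import numpy as np

def mic_to_pixel(samples, sample_rate, target_hz):
    """One 'pixel' of the sound camera idea.

    FFT a single microphone's block of samples and return the spectral
    magnitude in the bin nearest target_hz. Applied across all 1024 mics
    of a 32x32 array, this yields one grayscale frame per block.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]
```

Mapping the same block boundary across every microphone and normalizing the resulting 32×32 grid of magnitudes gives the blobs-of-color movie described below.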

[Artem]’s microphone camera is constructed out of several modules, each of them consisting of an 8×8 array of MEMS microphones controlled via an FPGA. These individual modules can be chained together, and the ‘big build’ is a 32×32 array. After a few problems with manufacturing, the board actually worked: he was recording 64 channels of audio from a single panel. Turning on the FFT visualization and pointing it at a speaker revealed that yes, he had indeed made a sound camera.

The result is a terribly crude movie with blobs of color, but that’s the reality of a camera that has only 32×32 resolution. Right now the sound camera works, the images are crude, and [Artem] has a few ideas of where to go next. A cheap PC is fast enough to record and process all the data, but now it’s an issue of bandwidth: 30 frames per second is a total of 64 Mbps of data. That’s doable, but it would need another FPGA implementation.

Is this sonic vision? Yes, in that the board technically works. No, in that the project is stalled, and it’s expensive by any electronics hobbyist’s standards. Still, it’s one of the best to grace our front page.

[Thanks zakqwy for the tip!]


Filed under: digital audio hacks, FPGA, slider

by Brian Benchoff at July 01, 2016 08:01 AM

June 23, 2016

OSM podcast

rncbc.org

Qtractor 0.7.8 - The Snobby Graviton is out!


So it's first solstice'16...

The world sure is a harsh mistress... yeah, you read that right! Heinlein's Moon has just been intentionally rephrased. Yeah, whatever.

Just as the UK vs. EU affair falls under close scrutiny, and sizzling winds of trumpeting (pun intended, again) blow in from the other side of the pond, we all should mark the days we're living in.

No worries: we still have some feeble but comforting news:

Qtractor 0.7.8 (snobby graviton) is out!

Nevertheless ;)

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Change-log:

  • MIDI file track names (and any other SMF META events) are now converted to and from the base ASCII/Latin-1 encoding, so as to prevent invalid SMF files whenever non-Latin-1, UTF-8-encoded MIDI track names are given.
  • MIDI file tempo-map and location-marker import/export is now hopefully corrected, after almost a decade in error, regarding MIDI resolution conversion when it differs from the current session's setting (TPQN, ticks-per-quarter-note, aka ticks-per-beat, etc.).
  • Introducing LV2 UI Show interface support for other types than Qt, Gtk, X11 and lv2_external_ui.
  • Prevent any visual updates while exporting (freewheeling) audio tracks that have at least one plugin activate state automation enabled for playback (as much for not showing messages like "QObject::connect: Cannot queue arguments of type 'QVector'"... anymore).
  • The common buses management dialog (View/Buses...) sees the superfluous Refresh button finally removed, while two new button commands take its place: (move) Up and Down.
  • LV2 plug-in Patch support has been added and LV2 plug-ins parameter properties manipulation is now accessible on the generic plug-in properties dialog.
  • Fixed a recently introduced bug, that rendered all but one plug-in instance to silence, affecting only DSSI plug-ins which implement DSSI_Descriptor::run_multiple_synths() eg. fluidsynth-dssi, hexter, etc.
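The first change-log item above, coercing track names down to Latin-1 so the SMF META bytes stay valid, can be illustrated with a tiny sketch. This is hypothetical Python for illustration, not Qtractor's actual C++ routine:

```python
def to_latin1(name):
    """Coerce a (possibly UTF-8/Unicode) track name to Latin-1.

    Characters with no Latin-1 representation are replaced with '?',
    so the bytes written into the SMF META event are always valid
    Latin-1, mirroring the conversion described in the change-log.
    """
    return name.encode("latin-1", errors="replace").decode("latin-1")
```

Names that already fit in Latin-1 (including accented Western European characters) pass through unchanged; anything outside it gets substituted rather than producing an invalid file.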

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (ongoing; help still wanted, always!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


Enjoy && Have (lots of) fun.

by rncbc at June 23, 2016 06:00 PM

Nothing Special

Room Treatment and Open Source Room Evaluation

It's hard to improve something you can't measure.

My studio space is much, much too reverberant. This is not surprising, since it's a basement room with laminate flooring and virtually no soft, absorbent surfaces at all. I planned to add acoustic treatment from the get-go, but funding made me wait until now. I've been recording DI guitars, drum samples, and synth programming, but nothing acoustic until the room gets tamed a little bit.



(note: I get pretty explanatory about why bass traps matter in the next several paragraphs. If you only care about the measurement stuff, skip to below the pictures.)

Well, how do we know what needs taming? First, there are some rules of thumb. My room is about 13'x11'x7.5', which isn't an especially large space. This means that sound waves bouncing off the walls will have some strong resonances at 13', 11', and 7.5' wavelengths, which equate to about 86Hz, 100Hz, and 150Hz respectively. There will be many more resonances, but these will be the strongest ones. These become standing waves, where the walls just bounce the acoustic energy back and forth and back and forth and back and forth... not forever, but longer than the other frequencies in my music.
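The dimension-to-frequency arithmetic above is just the speed of sound divided by the wavelength, taking each room dimension as a wavelength. A few lines to check the numbers (c ≈ 1125 ft/s at room temperature; treat these as ballpark rule-of-thumb figures, as in the text):

```python
SPEED_OF_SOUND_FT_S = 1125.0  # approximate speed of sound at room temperature

def resonance_hz(dimension_ft):
    """Frequency whose wavelength equals a room dimension (f = c / wavelength),
    the rule of thumb used here to flag a room's strongest problem areas."""
    return SPEED_OF_SOUND_FT_S / dimension_ft

for d in (13.0, 11.0, 7.5):
    print(f"{d}' -> {resonance_hz(d):.0f} Hz")
```

For the 13'x11'x7.5' room this prints roughly 87, 102, and 150 Hz, matching the figures quoted above.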

For my room, these are very much in the audible spectrum, so this acoustic energy hanging around in the room will be covering other stuff I want to hear (for a few hundred extra ms) while mixing. In addition to these primary modes, there will also be resonances at 2x, 3x, 4x, etc. of these frequencies. Typically the low end is where it gets harder to hear what's going on, but all the reflections add up to the total reverberance, which is currently a bit too much for my recording.

Remember that acoustic waves are switching (or waving, even) between high pressure/low speed and low pressure/high speed. Where the high points lie depends on the wavelength (and the location of the sound source). At the boundaries of the room, the air carrying the primary modes' waves (theoretically) doesn't move at all, which means the pressure is highest there. At the very middle of the room you have a point where the air carrying these waves is moving the fastest. Of course, the air is usually carrying lots of waves at the same time, so exactly how it's moving and pressurized around the room is hard to predict.

With large wavelengths like the ones we're most worried about, you aren't going to stop them with a 1"-thick piece of foam hung on the wall (no matter how expensive it was). You need a longer space to act on the wave and trap more energy. In small rooms, more or less the only option is porous absorbers, which basically take acoustic energy out of the room as the air carrying the waves tries to move through the material of the treatment. Right against the wall the air is not moving at all, so putting material there isn't going to be very effective against the standing waves. And only 1" of material isn't going to act on very much air. So you need a volume of material, and you need to put it in the right place.

Basically, thicker is better to stop these low waves. If you have sufficient space in your room, put in a floor-to-ceiling, 6'-deep bass trap. But most of us don't have that kind of space to give up. The thicker the panel, the less dense the material you should use. Thick traps will also stop higher frequencies, so basically, just focus on the low stuff and the highs will be fine. Often, if the trap is not at a direct reflecting point from the speaker, it's advised to glue kraft paper to the material, which bounces some of the ambient high end around the room so it's not too dead. How dead is too dead? How much high end does each one bounce? I don't know; it's just a rule of thumb. The rule for depth is quarter wavelength: an 11' wave really will be stopped well by a 2.75'-thick trap. This thickness guarantees that there will be some air moving somewhere through the trap, even if you put it right in the null. Do you have a couple extra feet of space to give up all around the room? Me neither. But we'll come back to that. Also note that surface area is more important than thickness. Once you've covered enough wall/floor/ceiling, the next priority is thickness.

The next principle is placement. You can place treatment wherever you want in the room, but some places are better than others. Right against the wall is OK, because air is moving right up until the wall, but it will be better with a little gap, because the air moves faster a little further from the wall. So we come back to the quarter-wavelength rule: the most effective placement of a panel is spaced from the wall by a gap equal to its thickness. A 3" panel is best 3" away from the wall; this effectively doubles the thickness of your panel. Thus we see placement and thickness are related. Now your 3" panel is acting like it's 6" thick, damping pretty effectively down to 24" waves (~563Hz), and it also works well on all shorter waves. Bass traps are really broadband absorbers. But... 563Hz is a depressingly high frequency when we're worried about 80Hz. This trap will do SOMETHING to even 40Hz waves, but not a whole lot. What do we do if our 13' room mode is causing a really strong resonance?

You can move your trap further into the room. This leaves a gap in the absorption curve, but it makes the absorption reach lower. Move the 3" panel out to a 6" gap and it won't be as effective at absorbing 563Hz, but now it works much better on 375Hz. You are creating a tuned trap. It still works some on 563Hz, but the absorption curve will have a low point and then a bump at 375Hz. Angling the trap so the gap varies can help smooth this response, making it absorb more frequencies, but each less effectively. So trade off a smooth curve against really absorbing a lot of energy at a specific frequency, as you need.
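The quarter-wavelength arithmetic behind these tuning numbers boils down to one formula: a panel plus its air gap acts well on waves down to f = c / (4 × (thickness + gap)). A quick sketch:

```python
SPEED_OF_SOUND_IN_S = 13500.0  # ~1125 ft/s expressed in inches per second

def trap_tuning_hz(panel_in, gap_in):
    """Lowest frequency a porous panel treats effectively, per the
    quarter-wavelength rule: the panel depth plus its air gap should
    be a quarter of the target wavelength."""
    return SPEED_OF_SOUND_IN_S / (4.0 * (panel_in + gap_in))
```

A 3" panel spaced 3" off the wall works down to about 563Hz, and the same panel with a 6" gap reaches down to 375Hz, matching the figures in the text.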

The numbers here are pretty theoretical. Even though the trap is tuned to a certain frequency, a lot of other frequencies will get absorbed. Some waves will enter at angles, which makes the trap seem thicker. Some waves will bounce off. Some waves will diffract (bend) around the trap somewhat. There are so many variables that it's very difficult to predict acoustics precisely. But these rules of thumb are applicable in most cases.

The final thing to discuss is material. It's best to find one that has been tested, with published numbers, so you have a good idea if and how it will work. Mineral wool is a fibrous material that resists air passing through. Fiberglass insulation can work too. Rigid fiberglass Owens Corning 703 is the standard choice, but mineral wool is cheaper and just as effective, so it's becoming more popular. Both materials (and there are others) come in various densities, and here the idea that thicker means less dense comes into play: if the material is too dense, acoustic waves can bounce back out on their way through rather than be absorbed.

Man. I didn't set out to give a lecture on acoustics, but it's there and I'm not deleting it. I do put the bla in blog, remember? There's a lot more (and better) reading you can do at an acoustic expert's site.

For me and my room (and my budget), I started out building two 9"-deep, 23"-wide floor-to-ceiling traps for the two corners I have access to (the other two corners are blocked by the door and my wife's sewing table). These will be stuffed with Roxul Safe and Sound (SnS), a lower-density mineral wool. It's available from Lowes online, but it was cheaper to find a local supplier to special-order it for me.


Roxul compresses it in the packaging nicely

I will build a 6"x23" panel using whatever's left and will place it behind the listening position. I also ordered a bag of the denser Roxul Rockboard 60 (RB60). I'm still waiting for it to come in (rare stuff to find in little Logan, UT, but I found a supplier kind enough to order it and let me piggyback on their shipping container so I'm not paying any shipping. Thanks, Building Specialties!). I will also build four 4"x24"x48" panels out of the Rockboard 60 (when it finally arrives), which is a density that more or less matches the performance of OC703. These will be hung on the walls at the first reflection points and in the ceiling corners. In a year or so, when I have some more money, I plan to buy a second bag of the Rockboard, which will hopefully be enough treatment to feel pretty well done. I considered using the 2" RB60 panels individually so I could cover more surface (which is the better thing acoustically), but in the end I want 4" panels, and I don't know whether it would be feasible to rebuild these later to add thickness.
my stack of flashing

I more or less followed Steven Helm's method, with some variations. The stuff he used isn't very available, so I bought some 20-gauge 1.5" galvanized L-framing (angle flashing) from the same local supply shop. They had 25ga., but I was worried it would be too flimsy, considering that even on the rack a lot of it got bent; I keep envisioning my kids leaning against them or something and putting a big dent in the side. After buying it I worried it would be too heavy, but now, after the build, I think the thicker material was a good choice for my towering 7.5' bass traps. For the smaller 2'x4' panels that are going to be hung up, I'm not sure yet.

I chose not to do a wood frame because I thought riveting would be much faster than nailing, since I don't have a compressor yet. Unfortunately, I didn't foresee how long it takes to drill through 20ga. steel. After the first trap I found it's much faster to punch a hole with a nail, then drill it out to the rivet size. It's nice when you have something to push against (a board underneath), but where I was limited on workspace I sometimes had to drill sideways. A set of vise-grip pliers made that much easier.


Steven's advice about keeping it square is very good; it's something I didn't do the best at on the first trap, but I wasn't too far off either. The key is using the square to keep your snips cutting squarely. Also, since my frame material is so thick it doesn't bend very tightly, I found it useful to take some pliers and twist the corner a bit to square it up.
Corner is a bit round

a bit tighter corner now
Since my traps are taller than a single SnS panel, I had to stack them and cut 6" off the top. A serrated knife works best for cutting this stuff, but I didn't have an old one around, so I improvised one from some scrap sheet metal.

I staggered the seams to try to make a more homogeneous material.


With all the interior assembled, I think the frames actually look good enough that you could keep them on the outside, but my wife preferred that the whole thing be wrapped in fabric. I don't care either way.


Before covering them, though, I glued on some kraft paper using spray adhesive. I worked from top to bottom, but some of them still got a bit wrinkled.




The paper was a bit wider than the frame, so I cut around the frame and tucked the excess behind for a tidier look.





I'd say they look pretty darn good even without fabric!




Anyway, all that acoustic blabber above boils down to this: even when following rules of thumb, the best thing to do is measure the room before and after treatment, to see what needs treating and how well your treatment performed. If it's good, leave it; if it's bad, you can add more, or move things around to address where it's performing poorly.

Since measuring is important, and I'm kind of a stickler for open source software, I will show you today how to do it. The de facto standard for measurement is the Room EQ Wizard (REW) freeware program. It's free but not libre, so I decided to use what was libre. Full disclosure: I installed REW and tried it, but could never get sound to come out of it, which helped motivate the switch. I was impressed REW had a Linux installer, but I couldn't find any answers on getting sound out. It's Java-based and not JACK-capable, so it couldn't talk to my FireWire soundcard. REW is very good, but for the freedom idealists out there, we can use Aliki.

The method is the same in both: generate a sweep of sine tones with your speakers, record the room's response with your mic, and do some processing that creates an impulse response for your room. An impulse is a broadband signal that contains all frequencies equally for a very, very (ideally infinitely) short amount of time. True impulses are difficult to generate, so it's easier to send the frequencies one at a time and then combine them with some math. I've talked a little about measuring impulse responses before. The program I used back then (qloud) isn't compiling easily for me these days, because it hasn't been updated for modern Qt libraries, and Aliki is more tuned for room measurement vs. loudspeaker measurement.

I am most interested in two impulse responses: 1. the room response between my monitors and my ears while mixing, and 2. the room response between my instruments and the mic. Unfortunately I can't take my monitors or my mic out of the measurement, because I don't have anything else to generate or record the sine sweeps with. So each measurement will have those parts of my signal chain's frequency response convolved in too, but I think they're flat enough to get an idea, and they'll be consistent for before-and-after treatment comparisons. I don't have a planned position for where I will be recording in this room, but the listening position won't be moving, so I'm focused on response 1.

The Aliki manual linked above is pretty good, so for the most part I'm not going to rehash it here. You must select a project location, and I found that anywhere outside my home directory didn't work. Aliki makes four folders in that location to store different audio files: sweep, capture, impulse, and edited files.

We must first make a sweep, so click the sweep button. I'm going from 20 Hz to 22000 Hz; may as well see the full range, no? A longer sweep can actually reduce the noise of the measurement, so I went a full 15 seconds. This generates an audio file with the sweep in it in the sweep folder. Aliki stores everything as .ald files, basically a wav with a simpler header, I think.
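
If you're curious what that sweep file contains, here's a rough sketch (my own illustration, not Aliki's actual code) of generating a logarithmic sine sweep in Python; the frequency range, duration, and sample rate mirror the settings above:

```python
import math

def log_sweep(f1, f2, duration, rate):
    """Exponential (log) sine sweep from f1 to f2 Hz, returned as raw samples."""
    n = int(duration * rate)
    L = duration / math.log(f2 / f1)      # sweep "time constant"
    K = 2 * math.pi * f1 * L
    return [math.sin(K * (math.exp(i / (L * rate)) - 1)) for i in range(n)]

# 20 Hz to 22 kHz over 15 seconds, as in the Aliki settings above
sweep = log_sweep(20.0, 22000.0, 15.0, 48000)
print(len(sweep))  # 720000 samples at 48 kHz
```

A log sweep spends equal time per octave, which is why it gives a better noise floor in the bass than a linear sweep of the same length.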

Next step: capture. Set up your audio input and output ports, and pick the sweep file for Aliki to play. Use the test function to get your levels. I found that even with my preamps cranked, the levels coming in from my mic were low. It was night, so I didn't want to play the sweep much louder. You can edit the captures if you need to. Each capture makes a new file or files in the capture directory.

I did this over several days: I measured before treatment, then with the traps in place before the paper was added, and again after the paper was glued on. Use the load function to get your files, and Aliki will show them in the main window. Since my levels were low, I went ahead and misused the edit functions to add gain to the capture files so they were somewhat near full swing.

The next step is the convolution that removes the sweep and calculates the impulse response. Select the sweep file you used, set the end time to be longer than your sweep was, click apply, and it should give you the impulse response. Be aware that if your levels are low like mine were, you'll only see the tiniest blip of waveform near zero. Save that as a new file and then go to edit.
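
Conceptually, that convolution step is a matched-filter operation: cross-correlate the captured audio against the sweep you played, and the sweep collapses into an impulse at the room's delay. A toy Python sketch (my own illustration, not Aliki's algorithm), using a simulated "room" that is just a delay plus attenuation:

```python
import math

RATE = 8000

def lin_sweep(f1, f2, dur, rate):
    """Linear chirp from f1 to f2 Hz (kept linear to make the math obvious)."""
    n = int(dur * rate)
    return [math.sin(2 * math.pi * (f1 + (f2 - f1) * i / (2 * n)) * i / rate)
            for i in range(n)]

sweep = lin_sweep(100, 3000, 0.25, RATE)

# Pretend the "room" is a pure 53-sample delay with 50% attenuation
delay, gain = 53, 0.5
recorded = [0.0] * delay + [gain * s for s in sweep]

def xcorr_peak(rec, ref):
    """Lag at which the cross-correlation of rec against ref is strongest."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rec) - len(ref) + 1):
        acc = sum(rec[lag + i] * ref[i] for i in range(len(ref)))
        if acc > best_val:
            best_lag, best_val = lag, acc
    return best_lag

print(xcorr_peak(recorded, sweep))  # 53: the impulse lands at the room's delay
```

Aliki uses the log sweep with its exact inverse filter (which also lets harmonic distortion be separated out), but the collapse-the-sweep-to-an-impulse idea is the same.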

In edit, you'll likely need to adjust the gain, but you can also adjust the length, and in the end you have a lovely impulse response that you can export to a .wav file. You can listen to it (though it's not much to listen to) or, more practically, use it in your favorite convolution plugin like IR or KlangFalter.

But we don't want to use this impulse for convolving signals; we can already get that reverb by just playing an instrument in the room! We want to analyze the impulse response to see if there's improvement, or if something still needs to be changed. So this is where I imported the IR wav files into GNU Octave.

I wrote a few scripts to help out, namely plotIREQ and plotIRwaterfall; they can be found in their git repository. I also made fftdecimate, which smooths out the raw plotIREQ plot:



to this:

I won't go through the code in too much detail (if you'd like me to, leave a comment and I'll do another post), but look at plotMyIRs.m for usage examples of how I generated these plots.
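
To give a feel for what an fftdecimate-style smoothing pass does, here's a minimal fractional-octave smoother in Python (an illustration of the idea only; the actual Octave script may work differently):

```python
import math

def smooth_octave(freqs, mags, frac=5):
    """Replace each magnitude with the average over a 1/frac-octave window."""
    half = 2 ** (1.0 / (2 * frac))   # half-window width as a frequency ratio
    out = []
    for f in freqs:
        lo, hi = f / half, f * half
        vals = [m for g, m in zip(freqs, mags) if lo <= g <= hi]
        out.append(sum(vals) / len(vals))
    return out

freqs = [100 * 2 ** (i / 12) for i in range(24)]   # semitone-spaced points
ragged = [float(i % 2) for i in range(24)]         # comb-like raw response
print(smooth_octave(freqs, ragged))
```

Averaging over constant-ratio (rather than constant-width) windows is what makes the smoothed curve match how we hear: wider absolute windows at high frequencies, narrow ones in the bass.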


You can see the big bump from around 150 Hz to 2 kHz, and a couple of big valleys at 75 Hz, 90 Hz, 110 Hz, etc. One thing I decided from looking at these is that the subwoofer should be turned up a bit, since my Blue Sky Exo2s cross over at around 150 Hz, and everything below that measured rather low.

I was hoping for a smoother result, especially in the low end, but I plan to build more broadband absorbers for the first reflection points. While a 4" thick panel doesn't target the really low end like these bass traps do, they do have some effect, even on the very low frequencies. So I hope they'll have a cumulative effect on that lower part of the graph.


The other point I'd like to comment on is that the paper didn't seem to make much of a difference. It's possible that since it wasn't factory-glued onto the rockwool, it lacks a sufficient bond to transfer the energy properly. It doesn't seem to hurt the results much either; in fact, around 90 Hz it actually seems to make the response smoother, so I don't plan to remove it (yet, at least).

The last plots I want to look at are the waterfall plots. These show how the frequencies respond over time, so you can see if any frequencies are ringing/resonating and need better treatment.
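
A waterfall is essentially a spectrogram of the impulse response's decay: slice the IR into overlapping blocks and measure each frequency band's level in every slice. A toy Python sketch of that idea (my illustration, not the plotIRwaterfall script itself):

```python
import math

def band_level(seg, freq, rate):
    """Magnitude of a single DFT bin of one block (Goertzel-style)."""
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(seg))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(seg))
    return math.hypot(re, im) / len(seg)

def waterfall(ir, freqs, rate, block=256, hop=128):
    """One row of band levels per time slice of the impulse response."""
    return [[band_level(ir[s:s + block], f, rate) for f in freqs]
            for s in range(0, len(ir) - block + 1, hop)]

# Toy IR: a 110 Hz mode that rings a long time, a 440 Hz component that dies fast
RATE = 4000
ir = [math.exp(-3 * i / RATE) * math.sin(2 * math.pi * 110 * i / RATE) +
      math.exp(-40 * i / RATE) * math.sin(2 * math.pi * 440 * i / RATE)
      for i in range(2048)]
rows = waterfall(ir, [110, 440], RATE)
print(rows[0], rows[-1])  # 440 Hz is gone by the last slice; 110 Hz lingers
```

In a real waterfall you'd plot many bands on a log frequency axis; a band whose level refuses to drop from row to row is exactly the kind of ringing mode described below.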


Here we see some anomalies. Just comparing the first and final plots, it's easy to see that nearly every frequency decays much more quickly (we're focused on the lower region, 400 Hz and below, since that's where the room's primary modes lie). You also see a long resonance somewhere around 110 Hz that still isn't addressed, which is probably the next target. I can try to move the current traps out from the wall and see if that helps, or make a new panel and try to tune it.

Really though, I'm probably going to wait until I've built the next set of panels.
Hope this was informative and useful. Try out those Octave scripts. And please comment!

by Spencer (noreply@blogger.com) at June 23, 2016 03:10 PM

June 20, 2016

open-source – CDM Create Digital Music

A composition you can only hear by moving your head

“It’s almost like there’s an echo of the original music in the space.”

After years of music being centered on stereo space and fixed timelines, sound seems ripe for reimagination as open and relative. Tim Murray-Browne sends us a fascinating idea for how to do that, in a composition in sound that transforms as you change your point of view.

Anamorphic Composition (No. 1) is a work that uses head and eye tracking so that you explore the piece by shifting your gaze and craning your neck. That makes for a different sort of composition – one in which time is erased, and fragments of sound are placed in space.

Here’s a simple intro video:

Anamorphic Composition (No. 1) from Tim Murray-Browne on Vimeo.

I was also unfamiliar with the word “anamorphosis”:

Anamorphosis is a form which appears distorted or jumbled until viewed from a precise angle. Sometimes in the chaos of information arriving at our senses, there can be a similar moment of clarity, a brief glimpse suggestive of a perspective where the pieces align.

Tech details:

The head tracking and most of the 3D is done in Cinder using the Kinect One. This pipes OSC into SuperCollider, which does the sound synthesis. It's pretty much entirely additive synthesis based around the harmonics of a bell.
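
Additive synthesis of a bell just sums inharmonic sine partials, each with its own decay. A hedged sketch in Python (the partial ratios and decay rates here are generic bell-like values I've chosen for illustration, not the ones Tim actually used):

```python
import math

# Illustrative minor-third-bell partial ratios and amplitudes (assumed values)
PARTIALS = [(0.5, 1.0), (1.0, 0.6), (1.2, 0.5), (1.5, 0.3), (2.0, 0.4)]

def bell(f0, dur, rate):
    """Sum decaying sine partials; higher partials die away faster."""
    out = []
    for i in range(int(dur * rate)):
        t = i / rate
        out.append(sum(a * math.exp(-2.0 * r * t) * math.sin(2 * math.pi * f0 * r * t)
                       for r, a in PARTIALS))
    return out

tone = bell(440.0, 1.0, 8000)
```

Because every partial is an independent oscillator, a piece like this can freeze, detune, or re-weight them per listener gaze, which is much harder with sample playback.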

I’d love to see experiments with this via acoustically spatialized sound, too (not just virtual tracking). Indeed, this question came up in a discussion we hosted in Berlin in April, as one audience member talked about how his perception of a composition changed as he tilted his head. I had a similar experience taking in the work of Tristan Perich at Sónar Festival this weekend (more on that later).

On the other hand, virtual spaces will present still other possibilities – as well as approaches that would bend the “real.” With the rise of VR experiences in technology, the question of point of view in sound will become as important as point of view in image. So this is the right time to ask this question, surely.

Something is lost on the Internet, so if you’re in London, check out the exhibition in person. It opens on the 27th:

http://timmb.com/anamorphic-composition-no-1/

The post A composition you can only hear by moving your head appeared first on CDM Create Digital Music.

by Peter Kirn at June 20, 2016 04:30 PM

Libre Music Production - Articles, Tutorials and News

LMP Asks #19: An interview with Vladimir Sadovnikov

LMP Asks #19: An interview with Vladimir Sadovnikov

This month LMP Asks talks to Vladimir Sadovnikov, programmer and sound engineer, about his project, LSP plugins, which aims to bring new, previously non-existent plugins to Linux. As well as the LSP plugin suite, Vladimir has also contributed to other Linux audio projects such as Calf Studio Gear and Hydrogen.

by Conor at June 20, 2016 12:49 PM

June 18, 2016

Libre Music Production - Articles, Tutorials and News

Check out 'Why, Phil?', new Linux audio webshow series

Check out 'Why, Phil?', new Linux audio webshow series

Philip Yassin has recently started an upbeat Linux audio webshow series called 'Why, Phil?'. Though only recently started, the series has already notched up an impressive 7 episodes, most of which revolve around Phil's favourite DAW, Qtractor.

by Conor at June 18, 2016 06:45 PM

The "Gang of 3" is loose again

The "Gang of 3" is loose again

The Vee One Suite, aka the gang of three old-school homebrew software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), are here released once again, now in their tenth reincarnation.

by yassinphilip at June 18, 2016 03:25 PM

June 17, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.2 stable release (binaries)

Pre-built binary images of the 1.8.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 17, 2016 02:00 PM

June 16, 2016

rncbc.org

Vee One Suite 0.7.5 - The Tenth beta is out!


Hiya!

The Vee One Suite, aka the gang of three old-school homebrew software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), are here released once again, now in their tenth reincarnation.

All available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Manager) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The esoteric change-log goes like this:

  • LV2 Patch property parameters and Worker/Schedule support are now finally in place, allowing for sample file path selection from generic user interfaces (applies to samplv1 and drumkv1 only).
  • All changes to most continuous parameter values are now smoothed to a fast but finite slew rate.
  • All BPM sync options to current transport (Auto) have been refactored to a new special minimum value (which is now zero).
  • In compliance with the LV2 spec, MIDI Controllers now affect cached parameter values only, via shadow ports, instead of input control ports directly, mitigating their read-only restriction.
  • Make sure LV2 plug-in state is properly reset on restore.
  • Dropped the --enable-qt5 configure option, as it was found redundant given that it's the build default anyway (suggestion by Guido Scholz, while for Qtractor; thanks).
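
That "finite slew rate" smoothing is typically just a per-tick cap on how far a parameter may move toward its target. A minimal sketch of the concept (my illustration, not synthv1's actual implementation):

```python
def slew(target, current, max_step):
    """Move current toward target by at most max_step per control tick."""
    delta = target - current
    if delta > max_step:
        return current + max_step
    if delta < -max_step:
        return current - max_step
    return target

# A cutoff jump from 0.2 to 0.9 arrives over several ticks instead of clicking
value, path = 0.2, []
for _ in range(10):
    value = slew(0.9, value, 0.1)
    path.append(round(value, 10))
print(path)
```

Limiting the slew per tick is what turns an abrupt knob or MIDI CC jump into a short glide, avoiding zipper noise and clicks in the audio.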

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And then again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.5 (tenth official beta) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git


samplv1 - an old-school polyphonic sampler

samplv1 0.7.5 (tenth official beta) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git


drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.5 (tenth official beta) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git


Enjoy && have lots of fun ;)

by rncbc at June 16, 2016 05:30 PM

June 15, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #18: Andrew Lambert & Neil Cosgrove

LMP Asks #18: Andrew Lambert & Neil Cosgrove

This month we interviewed Andrew Lambert and Neil Cosgrove, members of Lorenz Attraction and developers of LNX_Studio, a cross platform, customizable, networked DAW written in the SuperCollider programming language.  Please see the end of the article for links to LNX_Studio and Lorenz Attraction's music!

by Scott Petersen at June 15, 2016 05:09 PM

June 13, 2016

digital audio hacks – Hackaday

Ball Run Gets Custom Sound Effects

Building a marble run has long been on my project list, but now I’m going to have to revise that plan. In addition to building an interesting track for the orbs to traverse, [Jack Atherton] added custom sound effects triggered by the marble.

I ran into [Jack] at Stanford University’s Center for Computer Research in Music and Acoustics booth at Maker Faire. That’s a mouthful, so they usually go with the acronym CCRMA. In addition to his project there were numerous others on display and all have a brief write-up for your enjoyment.

[Jack] calls his project Leap the Dips which is the same name as the roller coaster the track was modeled after. This is the first I’ve heard of laying out a rolling ball sculpture track by following an amusement park ride, but it makes a lot of sense since the engineering for keeping the ball rolling has already been done. After bending the heavy gauge wire [Jack] secured it in place with lead-free solder and a blowtorch.

As mentioned, the project didn’t stop there. He added four piezo elements which are monitored by an Arduino board. Each is at a particularly extreme dip in the track which makes it easy to detect the marble rolling past. The USB connection to the computer allows the Arduino to trigger a MaxMSP patch to play back the sound effects.

For the demonstration, Faire goers wear headphones while letting the balls roll, but in the video below [Jack] let me plug in directly to the headphone port on his MacBook. It's a bit weird, since there's no background sound of the Faire during this part, but it was the only way I could get a reasonable recording of the audio. I love the effect, and think it would be really fun to package this as a standalone using the Teensy Audio library and audio adapter hardware.


Filed under: cons, digital audio hacks

by Mike Szczys at June 13, 2016 06:31 PM

Synchronize Data With Audio From A $2 MP3 Player

Many of the hacks featured here are complex feats of ingenuity that you might expect to have emerged from a space-age laboratory rather than a hacker’s bench. Impressive stuff, but on the other side of the coin the essence of a good hack is often just a simple and elegant way of solving a technical problem using clever lateral thinking.

Take this project from [drtune]: he needed to synchronize some lighting to an audio stream from an MP3 player, and wanted to store his lighting control data on the same SD card as his MP3 file. Sadly his serial-controlled MP3 player module would only play audio data from the card, and he couldn't read a data file from it, so there seemed to be no easy way forward.

His solution was simple: realizing that the module has a stereo DAC but a mono amplifier, he encoded the data as an audio FSK stream similar to that used by modems back in the day, and applied it to one channel of his stereo MP3 file. He could then play the music from the first channel and digitize the FSK data on the other, before applying it to a software modem to retrieve its information.

There was a small snag though: the MP3 player summed both channels before supplying audio to its amplifier. Not a huge problem to overcome; a bit of detective work in the device datasheet allowed him to identify the resistor network doing the mixing, and he removed the component for the data channel.

He’s posted full details of the system in the video below the break, complete with waveforms and gratuitous playback of audio FSK data.

This isn’t the first time we’ve featured audio FSK data here at Hackaday. We’ve covered its use to retrieve ROMs from 8-bit computers, seen it appearing as part of TV news helicopter coverage, and even seen an NSA Cray supercomputer used to decode it when used as a Star Trek sound effect.


Filed under: digital audio hacks

by Jenny List at June 13, 2016 03:31 PM

Hackaday Prize Entry: 8-Bit Arduino Audio for Squares

A stock Arduino isn’t really known for its hi-fi audio generating abilities. For “serious” audio like sample playback, people usually add a shield with hardware to do the heavy lifting. Short of that, many projects limit themselves to constant-volume square waves, which is musically uninspiring, but it’s easy.

[Connor]’s volume-control scheme for the Arduino bridges the gap. He starts off with the tone library that makes those boring square waves, and adds dynamic volume control. The difference is easy to hear: in nature almost no sounds start and end instantaneously. Hit a gong and it rings, all the while getting quieter. That’s what [Connor]’s code lets you do with your Arduino and very little extra work on your part.

The code that accompanies the demo video (which is embedded below) is a good place to start playing around. The Gameboy/Mario sound, for instance, is as simple as playing two tones, and making the second one fade out. Nonetheless, it sounds great.

Behind the scenes, it uses Timer 0 at maximum speed to create the “analog” values (via PWM and the analogWrite() command) and Timer 1 to create the audio-rate square waves. That’s it, really, but that’s enough. A lot of beloved classic arcade games didn’t do much more.
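
The audible idea is easy to simulate offline. Here's a Python sketch (not [Connor]'s Arduino code, which does this in real time with the two timers described above) of a square wave multiplied by a decaying envelope, like that fading gong:

```python
import math

def fading_square(freq, dur, rate, decay=6.0):
    """Constant-pitch square wave whose amplitude decays exponentially."""
    period = rate / freq
    out = []
    for i in range(int(dur * rate)):
        sq = 1.0 if (i % period) < period / 2 else -1.0   # naive square wave
        out.append(sq * math.exp(-decay * i / rate))       # volume envelope
    return out

tone = fading_square(440.0, 0.5, 8000)
print(len(tone), tone[0])  # 4000 samples, starting at full amplitude 1.0
```

On the Arduino the envelope value becomes the PWM duty cycle on Timer 0, but the multiply-square-by-envelope structure is the same.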

While you can do significantly fancier things (like sample playback) with the same hardware, the volume-envelope-square-wave approach is easy to write code for. And if all you want is some simple, robotic-sounding sound effects for your robot, we really like this approach.


Filed under: Arduino Hacks, digital audio hacks, The Hackaday Prize

by Elliot Williams at June 13, 2016 05:01 AM

June 10, 2016

open-source – CDM Create Digital Music

Music thing’s Turing Machine gets a free Blocks version

We already saw some new reasons this week to check out Reaktor 6 and Blocks, the software modular environment. Here’s just one Blocks module that might get you hooked – and it’s free.

“Music thinking Machines,” out of Berlin, have built a software rendition of Music Thing’s awesome Turing Machine Eurorack module (created by Tom Whitwell). As that hardware is open source, and because what you can do in wiring you can also do in software, it was possible to build software creations from the Eurorack schematics.

The beauty of this is, you get the Turing Machine module in a form that lets you instantly control other Reaktor creations – as well as the ability to instantiate as many modules as you want without the aid of a screwdriver or waiting for a DHL delivery to arrive. (Hey, software has some advantages.) I don’t so much see it reducing the appeal of the hardware, either, as it makes me covet the hardware version every time I open up the Reaktor ensemble.

And the module is terrific. In addition to the Turing Machine Mk 2, you get the two Mk 2 expanders, Volts and Pulses.

The Turing Machine Mk 2 is a random looping sequencer – an idea generator that uses shift registers to make melodies and rhythms you can use with other modules. It’s also a fun build. But now, you can use that with the convenience of Reaktor.
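
The algorithm behind the hardware (and this Blocks port) is compact enough to sketch in a few lines of Python; this is my paraphrase of the published shift-register design, not code from either implementation:

```python
import random

BITS = 16

def turing_step(register, prob):
    """One clock tick: rotate the 16-bit register left; with probability
    `prob`, flip the bit that wraps around (prob=0 locks the loop)."""
    fed_back = (register >> (BITS - 1)) & 1
    if random.random() < prob:
        fed_back ^= 1
    return ((register << 1) & ((1 << BITS) - 1)) | fed_back

reg = 0b1010001110010110
notes = []
for _ in range(32):
    reg = turing_step(reg, 0.0)        # knob fully "locked": no flips
    notes.append(reg & 0xFF)           # low byte could drive a pitch CV
print(notes[:16] == notes[16:])        # True: a locked 16-step loop
```

Turn `prob` up and the loop slowly mutates; at the maximum the pattern inverts itself each pass, which is the "Möbius loop" behaviour described in the controls below.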

Pulses and Voltages expanders add still more unpredictability. Pulses is a random looping clock divider, and Voltages is a random looping step sequencer. I also like the unique front panels made just for the Reaktor version … I wonder if someone will translate that into actual hardware.

The idea is to connect them together: take the 8 P outputs from the Turing Machine and connect them to the 8 P inputs on Pulses (for pulses), and then do the same with the voltage inputs and outputs on Volts. You can also make use, as the example ensemble does, of a Clock and Clock Divider module included by default in Reaktor 6’s Blocks collection.

With controls for probability and sequence length, you can put it all together and have great fun with rhythms and tunes.

Download the Reaktor ensemble:

Turing Machine Mk2 plus Pulses and Volts Expanders [Reaktor User Library]

Here’s what the original modules look like in action:

Find out more:

https://github.com/TomWhitwell/TuringMachine/

Also worth a read (especially now with this latest example of what open source hardware can mean – call it free advertising in software form, not to mention a cool project):
Why open source hardware works for Music Thing Modular

Oh, and if you want to go the opposite direction, Tom also recently wrote a tutorial on writing firmware for the Mutable Clouds module. The old software/hardware line is more blurred than ever, as we make software versions of hardware that then interface with hardware and back to hardware again, and hardware also runs software. (Whew.)

Turing Machine Controls
Prob: Determines the probability of a bit being swapped from 0 to 1 (or vice versa).
Fully right locks the sequence of bits; fully left locks the sequence in a "mobius loop" mode.
Length: Sets the length of the sequence
Scale: Scales the range of the pitch output
+/-: Writes a 1 or a 0 bit into the shift register
AB: Modulation inputs

Pulses Expander Controls
Output: Selects 1 of the 11 gated outputs

Volts Expander Controls
1 to 5: Controls the voltage of the active bit

For more detailed information on how the Turing Machine works, please visit the Music Thing website: https://github.com/TomWhitwell/TuringMachine/

Music Thinking Machines
Berlin

The post Music thing’s Turing Machine gets a free Blocks version appeared first on CDM Create Digital Music.

by Peter Kirn at June 10, 2016 04:37 PM

Libre Music Production - Articles, Tutorials and News

John Option release debut album, "The cult of John Option"

John Option release debut album, "The cult of John Option"

John Option have just released "The cult of John Option". This is their debut album and it brings together all their singles published in the past few months, including remix versions.

As always, John Option's music is published under the terms of the Creative Commons License (CC-BY-SA) and is produced entirely using free software.

by Conor at June 10, 2016 01:30 PM

June 09, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.1. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

June 09, 2016 10:00 AM

June 06, 2016

open-source – CDM Create Digital Music

Ableton hacks: Push 2 video output and more

For years, the criticism of laptops has been about their displays – blue light on your face and that sense that a performer is checking email. But what if the problem isn’t the display, but the location of the display? Because being able to output video to your hardware, while you turn knobs and hit pads, could prove pretty darned useful.

Push 2 video output

And so that makes this latest hack really cool. 60 fps(!) video can now stream over a USB cable to Ableton’s Push 2 hardware. You’ll need some way of creating that video texture, but that’s there in Max for Live’s Jitter objects.

David Butler’s imp.push object, out last week, makes short work of this.

The ingredients that made this possible:
1. Ableton’s API documentation for Push 2, available now on GitHub thanks to Ableton and a lot of hard work by Ralf Suckow.

2. libusb

Learn more at this blog post:
imp.push Beta Released

Get the latest version (or collaborate) at GitHub

Next up on his to-do list – what to do with those RGB pads.

Here’s an impressive video from Cycling ’74 — ask.audio scooped us on this story last week, hat tip to them.

Thanks to Bjorn Vayner for the tip!

ubermap

Push 2 mappings

And while you’re finding cool stuff to do to expand your Push 2 capabilities, don’t miss this free set of scripts.

Ubermap is a free and open source script for Push 2 designed to let you map VST and AU plug-ins to your Push controller. What’s great about this is that there’s no middle man – nothing like Komplete Kontrol running between you and your plug-in, just direct mapping of parameters. It’s not as powerful or extensive as the Isotonik tool we covered last week, and it’s limited to Push 2 (with some Push 1 support), so you’ll still want to go that route if you fancy using other controller hardware. But the two can be viewed as complementary, particularly as all of this is possible because of Ableton’s API documentation.

You can find the scripts on the Ableton forum:

Ubermap for Push 2 (VST/AU parameter remapping)

There are links there to more documentation and tips on configuration of various plug-ins. Or to grab everything directly, head to GitHub:

http://bit.ly/ubermap-src

Now, let’s hope this paves the way for more native support in future releases of Live, and some sort of interface for doing this in the software without custom scripts. But there’s no reason to wait – these solutions do work now.

Previously:

Ableton just released every last detail of how Push 2 works

You can now access the Push 2 display from Max

Ableton hacks: map anything, even Kontakt and Reaktor

The post Ableton hacks: Push 2 video output and more appeared first on CDM Create Digital Music.

by Peter Kirn at June 06, 2016 03:28 PM

June 03, 2016

blog4

Embedded Artist Berlin concert 3.6.2016

After the great concert last week in Linz during the AMRO festival at Stadtwerkstatt, we play as Embedded Artist tonight in Berlin at Ausland:
http://ausland-berlin.de/embedded-artist-antez-morimoto

by herrsteiner (noreply@blogger.com) at June 03, 2016 12:50 AM

June 01, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #17: An interview with Frank Piesik

LMP Asks #17: An interview with Frank Piesik

This month we talked with Frank Piesik, a musician, inventor and educator living in Bremen.

Hi Frank, thanks for talking with us! First, can you tell us a little about yourself?

by Scott Petersen at June 01, 2016 02:05 PM

Contest: Win an amazing MOD Duo!

Contest: Win an amazing MOD Duo!

To commemorate the last batch shipment to Kickstarter backers, MOD Devices have set up a social media contest to give away a MOD Duo, the hardware stompbox which runs Linux and a whole ecosystem of FLOSS audio plugins.

by Conor at June 01, 2016 10:46 AM

May 31, 2016

ardour

Nightly builds are now for TESTING only

The master development branch of Ardour has recently been merged with two major development branches. These bring major new functionality to Ardour (tempo ramps and VCA masters, among other things), but the result is a new version of Ardour. This version is sufficiently different that it could alter/damage your Ardour configuration files and may not correctly work with existing sessions. We have therefore tagged it "5.0-pre0" so that it will create new configuration folders and not interact with your settings and preferences for older versions of Ardour.

read more

by paul at May 31, 2016 08:40 PM

Linux – CDM Create Digital Music

iZotope Mobius and the crazy fun of Shepard Tones

I always figure the measure of a good plug-in is, you want to tell everyone about it, but you don't want to tell everyone about it, because then they'll know about it. iZotope's Möbius is in that category for me – it's essentially a moving filter effect. And it's delicious, delicious candy.

iZotope have been on a bit of a tear lately. The company might be best known for mastering and restoration tools, but in 2016, they’ve had a series of stuff you might build new production ideas around. And I keep going to their folder in my sets. There’s the dynamic delay they built – an effect so good that you’ll overlook the fact that the UI is inexplicably washed out. (I just described it to a friend as looking like your license expired and the plug-in was disabled or something. And yet… I think there’s an instance of it on half the stuff I’ve made since I downloaded it.)

More recently, there was also a plug-in chock full of classic vocal effects.

iZotope Möbius brings an effect largely used in experimental sound design into prime time.

At its core is a perceptual trick called the "Shepard tone" (named for cognitive scientist Roger Shepard). Like the visual illusion of stripes on a rotating barber pole, the sonic illusion of the Shepard tone (or the continuously gliding Shepard–Risset glissando) is such that you perceive endlessly rising motion.

Here, what you should do for your coworkers / family members / whatever is definitely to turn this on and let them listen to it for ten hours. They’ll thank you later, I’m sure.

The Shepard Tone describes synthesis – just producing the sound. The Möbius Filter applies the technique to a resonant filter, so you can process any existing signal.

Musical marketing logic is such that of course you’re then obligated to tell people they’ll want to use this effect for everything, all the time. EDM! Guitars! Vocals! Whether you play the flugelhorn or are the director of a Bulgarian throat singing ensemble, Möbius Filter adds the motion and excitement every track needs!

And, uh, sorry iZotope, but as a result I find the sound samples on the main page kind of unlistenable. Of course, taste is unpredictable, so have a listen. (I guess actually this isn’t a bad example of a riser for EDM so much as me hating those kinds of risers. But then, I like that ten hours of glissandi above, so you probably shouldn’t listen to me.)

https://www.izotope.com/en/products/create-and-design/mobius-filter/sounds.html

Anyway, I love the sound on percussion. Here’s me messing around with that, demonstrating the ability to change direction, resonance, and speed, with stereo spatialization turned on:

The ability to add sync effects (and hocketing, with triplet or dotted rhythms) for me is especially endearing. And while you’ll tire quickly of extreme effects, you can certainly make Möbius Filter rather subtle, by adjusting the filter and mix level.

Möbius Filter is US$49 for most every Mac and Windows plug-in format. A trial version is available.


https://www.izotope.com/en/products/create-and-design/mobius-filter.html

It’s worth learning more about the Shepard and Risset techniques in general, though – get ready for a very nice rabbit hole to climb down. Surprisingly, the Wikipedia article is a terrific resource:

Shepard tone

If you want to try coding your own Shepard tone synthesis, you can do so in the free and open source, multi-platform environment SuperCollider. In fact, SuperCollider is what powered the dizzying musical performance by Marcus Schmickler CDM co-hosted with CTM Festival last month here in Berlin. Here’s a video tutorial that will guide you through the process (though there are lots of ways to accomplish this).
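If you'd rather stay outside SuperCollider, the same idea fits in a few lines of plain Python. This is a rough sketch, not anyone's production code; the sample rate, base frequency, and partial count are arbitrary choices. Partials sit an octave apart on a log-frequency ladder and glide upward, while a raised-cosine loudness window over the ladder makes each partial fade in silently at the bottom and vanish at the top, so the wrap-around is inaudible and the rise sounds endless.

```python
import math

def shepard_tone(duration=2.0, sr=8000, base=20.0, octaves=9):
    """Render a rising Shepard tone as a list of float samples."""
    out = []
    phases = [0.0] * octaves
    for n in range(int(duration * sr)):
        shift = (n / sr) % 1.0               # glide position, one octave per second
        sample = 0.0
        for k in range(octaves):
            pos = (k + shift) % octaves      # ladder position, wraps at the top
            freq = base * 2.0 ** pos
            # raised-cosine loudness window over log frequency: zero at both ends
            amp = 0.5 - 0.5 * math.cos(2 * math.pi * pos / octaves)
            phases[k] += 2 * math.pi * freq / sr   # keep each partial phase-continuous
            sample += amp * math.sin(phases[k])
        out.append(sample / octaves)
    return out

buf = shepard_tone()   # 16000 samples; write them out with the wave module to listen
```

The amplitude window is the whole trick: because a partial is silent exactly where it jumps from the top of the ladder back to the bottom, the ear never hears the reset.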

The technique doesn’t stop in synthesis, though. Just as the same basic perceptual trick can be applied to rising visuals and rising sounds, it can also be used in rhythm and tempo – which sounds every bit as crazy as you imagine. Here’s a description of that, with yet more SuperCollider code and a sound example using breaks. Wow.

Risset rhythm – eternal accelerando
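The rhythmic version can be sketched the same way. A hedged illustration (layer count and base tempo are arbitrary): several copies of the same loop run at tempos a doubling apart, all accelerating together, while a raised-cosine gain over the log-tempo ladder fades the fastest copy out as a fresh slow copy fades in, so the accelerando loops forever.

```python
import math

def risset_layers(t, base_bpm=60.0, n_layers=3):
    """For cycle position t in [0, 1), return a (bpm, gain) pair per layer.

    Layer k runs at base_bpm * 2**((k + t) % n_layers) BPM; its gain is a
    raised cosine over that log-tempo position, so layers enter slow and
    silent, swell, and fade out as they get fast."""
    layers = []
    for k in range(n_layers):
        pos = (k + t) % n_layers
        bpm = base_bpm * 2.0 ** pos
        gain = 0.5 - 0.5 * math.cos(2 * math.pi * pos / n_layers)
        layers.append((bpm, gain))
    return layers

start = risset_layers(0.0)   # bpms 60/120/240; the slowest layer enters at gain 0
```

At t = 0 and t = 1 the set of (bpm, gain) pairs is identical, which is exactly what lets the ever-accelerating pulse repeat seamlessly.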

Finally, the 1969 rendition of this technique by composer James Tenney is absolutely stunning. I don’t know how Ann felt about this, but it’s titled “For Ann.” (“JAMES! MY EARS!” Okay, maybe not; maybe Ann was into this stuff. It was 1969, after all.) Thanks to Jos Smolders for the tip.

Good times.

So, between Möbius Filter and SuperCollider, you can pretty much annoy anyone. I’m game.

https://supercollider.github.io

The post iZotope Mobius and the crazy fun of Shepard Tones appeared first on CDM Create Digital Music.

by Peter Kirn at May 31, 2016 07:11 PM

Scores of Beauty

Music Encoding Conference 2016 (Part 1)

About a year ago I posted a report on my first appearance at the Music Encoding Conference, which had taken place in Florence (Italy). There I introduced the idea of interfacing LilyPond with MEI, the de facto standard in (academic) digital music edition, and was very grateful to be welcomed warmly by that scholarly community. Over the past year this idea became increasingly concrete, and so I’m glad that German research funds made it possible to present another paper at this year’s conference, although Montréal (Canada) isn’t exactly around the corner. In a set of two posts I will talk about my impressions in general (current post) and about my paper and other LilyPond-related aspects (next post).

MEI (which stands for Music Encoding Initiative and denotes both a community and a format specification) is a quite small and friendly community, although it basically represents the Digital Humanities branch of musicology as a whole. As a consequence it’s nice to see many people again at this yearly convention. There were 67 registered participants from 10 countries, with a rather strong focus on North America and central Europe (last year in Florence I think we were around 80).

The MEC is a four-day event, with days two and three dedicated to the actual paper presentations. The first day features workshops, while the fourth day is an “unconference day” giving the opportunity for spontaneous or pre-arranged discussion and collaboration. A sub-event that seems to gain relevance each year is the conference banquet – one could even imagine that by now this plays a role when applying to organize the next MECs 😉 . We had a nice dinner at the Auberge Saint Gabriel with excellent food and wine and an extremely high noise floor that I attribute to the good mood and spirit we all had. And on the last evening we had the chance to attend a lecture recital with Karen Desmond and the VivaVoce ensemble, who gave us a commented overview of the history of notation from around 900 to the late 16th century.

Ensemble VivaVoce and Karen Desmond (click to enlarge)


Verovio Workshop

From the workshops I decided to attend Verovio – current status and future directions, which was partly a presentation of the tool itself and its latest development, but also a short hands-on introductory tutorial (OK, “hands-on” was limited to having the files available to look through and modify the configuration variables). Verovio is currently “the” tool of choice for displaying scores in digital music editions, so it’s obvious that I’m highly interested in learning more about it. Basically it is a library that renders MEI data to scores in SVG files, with a special feature being that the DOM structure of the SVG file matches that of the original MEI, which makes it easy to establish two-way links between source and rendering. Verovio is written in C++ and compiled to a number of target environments/languages. The most prominent one is JavaScript through which Verovio provides real-time engraving in the browser. You should consider having a look at the MEI Viewer demonstration page.

Screenshot from the Verovio website, showing the relation of rendering and source structure (click to enlarge)


Verovio’s primary focus is on speed and flexibility, and what can I say? It’s amazing! Once the library and the document have been downloaded, the score is rendered and modified near-instantly, with a user experience matching ordinary web browsing. It is possible to resize and navigate a score in real time, with instant reflow. Score items can easily be accessed through JavaScript and may be used to write back any actions to the original source file. And as we’re in the XML domain throughout, you can do cool things like rendering remotely hosted scores or extracting parts through XSL transformations and queries. A rather new feature is MIDI playback with highlighting of the played notes. The MIDI player is linked quite tightly into the document, so you can use the scrollbar or click on notes to jump playback, with everything staying robustly in sync.

Of course this performance comes at a cost: since Verovio is tuned for speed and flexibility, its engraving engine is rather simplistic. Apart from the fact that it doesn’t yet support everything a notation program would need, it will probably never compete with LilyPond in terms of engraving quality. On the other hand, LilyPond will probably never compete with Verovio on its native strengths, speed and flexibility. This boils down to Verovio and LilyPond being perfect complements rather than competitors. They should be able to happily coexist side by side – within the same editing project or even editing environment. But I’ll get back to that in the other post.

Paper Presentations

Days two and three were filled with paper presentations and posters, and I can hardly give a comprehensive account of everything. Instead I have to pick a few things and make some remarks from a somewhat LilyPond-ish perspective.

Our nice conference hall (presentation by Reiner Krämer). “Cope events” are somewhat like MIDI wrapped in LISP (click to view full image)

Metadata and Linked Data

Generally speaking the MEI has two independent objectives: music editing and metadata. The original inventor of MEI, Perry Roland, is actually a librarian, and so documenting everything about sources is an inherent goal in the MEI world. Typical projects in that domain might be the cataloguing of a historic library such as the Sources of the Detmold Court Theatre Collection (German only).

But encoding the physical sources alone only gets you so far if you ignore the power of linking data. There are numerous items in such a house that may refer to each other and provide additional information: bills, copyists’ marks, evening programmes, comments and modifications in individual copies of the music, and much more. Making this kind of information retrievable, possibly across projects, promises new areas of research.

Encoding enhanced data that specifies concrete performances of a work is another related area of research. Starting from secondary information like inscriptions in the performance material, existing approaches go all the way to designing systems that encode timing, articulation and dynamics from recorded music, as presented by Axel Berndt. Though still far from extracting their data directly from the recording, it seems a very promising project to provide a solid data foundation for investigating parameters of “musical” performance, for example determining a “rubato fingerprint” for a given pianist. Of course this also works in the other direction, and we heard a MIDI rendering of a string quartet of astonishing liveliness. I’d be particularly interested to see whether that technology could be built upon for notation editors’ playback engines.

Extending the Scope of MEI

A ubiquitous topic on the side of actual music encoding is how to deal with specific repertoire that isn’t covered by Common Western Music Notation. As MEI is so flexible and open, it is always possible to create project-specific customizations to include the notation repertoire at hand. But that freedom also carries the risk of fragmentation, to the point where the format might become meaningless. This is why it is so important to regularly discuss these things in the wider MEI community.

The top targets in this area seem to be neumes and lute (and other) tablature systems, while I didn’t see any attempts towards encoding contemporary or non-western notation styles so far.

Edition Projects

Of course there also were presentations of actual edition projects, of which I’ll mention just a few.

Neuma is a digital library of music scores encoded in MEI (and partially still MusicXML). It features searching by phrases, and the scores can be referenced to be rendered anywhere with Verovio (as described above). They have also been working with LilyPond and would be happy to have this as an additional option for presenting higher quality renderings of their scores and incipits.

Johannes Kepper gave an insightful and also amusing presentation about the walls they ran into with their digital Freischütz edition. This project pushed the limits of digital music edition pretty hard and can be used as a reference of approaches and limitations alike. Just imagine that their raw data is about 230 MB worth of XML files – of which approximately 100 MB account for the encoding of the autograph manuscript alone …

A poster was dedicated to the “genetic edition” of Beethoven’s sketches. This project sets out to encode the genetic process that can be retraced in the manuscript sources giving access to each step of Beethoven’s working process individually.

Salsah is a project at the Digital Humanities Lab at the University of Basel. They are working on an online presentation of parts of the Anton Webern Gesamtausgabe, namely the sketches (while the “regular” works are intended to be published as a traditional print-only edition). The project is still in the prototype stage, but it has to be said that it is fighting somewhat desperately with its data. The Webern edition is realized using Finale – and the exported MusicXML isn’t exactly suited to making semantic sense of … Well, they would have had the solution at their fingertips, but two and a half years ago I wasn’t able to convince them to switch to LilyPond before publishing the first printed volumes 😉


After these more general observations a second post will go into more detail about LilyPond specific topics, namely MEI’s lack of a professional engraving solution, my own presentation, and nCoda, a new editing system that was presented for the first time at the MEC (incidentally just two days after the flashy and heavily pushed Dorico announcement). I have been in touch with the nCoda developers for over a year now, and it was very nice and fruitful to have a week together in person – but that’s for the next post …

by Urs Liska at May 31, 2016 06:44 AM

May 28, 2016

A touch of music

Modeling rhythms using numbers - part 2

This is a continuation of my previous post on modeling rhythms using numbers.

Euclidean rhythms

The Euclidean rhythm in music was discovered by Godfried Toussaint in 2004 and is described in his 2005 paper "The Euclidean Algorithm Generates Traditional Musical Rhythms". Euclid's algorithm for the greatest common divisor of two numbers is used rhythmically: the two numbers give the counts of beats and silences, and distributing the beats as evenly as possible over the cycle generates the majority of important world music rhythms.
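As a concrete illustration (hedged: this is the standard even-distribution formulation, not the generalized version explored below), a Bresenham-style accumulator spreads k onsets as evenly as possible over n steps and reproduces Toussaint's E(k, n) patterns up to rotation, e.g. E(3, 8) is a rotation of the Cuban tresillo:

```python
def euclidean_rhythm(onsets, steps):
    """Spread `onsets` hits as evenly as possible over `steps` slots.

    Each step adds `onsets` to an error bucket; when the bucket overflows
    past `steps`, emit a hit and carry the remainder.  This is the even
    distribution Euclid's algorithm produces, up to rotation."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += onsets
        if bucket >= steps:
            bucket -= steps
            pattern.append("x")   # onset
        else:
            pattern.append(".")   # silence
    return "".join(pattern)

print(euclidean_rhythm(3, 8))   # ..x..x.x  (a rotation of the tresillo x..x..x.)
print(euclidean_rhythm(5, 8))   # .x.xx.xx  (a rotation of the cinquillo x.xx.xx.)
```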

Do it yourself

You can play with a slightly generalized version of Euclidean rhythms in your browser using a p5js-based sketch I made to test my understanding of the algorithms involved. If it doesn't work in your preferred browser, retry with Google Chrome.

The code

The code may still evolve in the future. There are some possibilities not yet explored (e.g. using ternary number systems instead of binary to drive 3 sounds per circle). You can download the full code for the p5js sketch on GitHub.

Screenshot of the p5js sketch running (click the image to enlarge)

The theory

So what does it do and how does it work? Each wheel contains a number of smaller circles. Each small circle represents a beat. With the length slider you decide how many beats are present on a wheel.  

Some beats are colored dark gray (these can be seen as strong beats), whereas other beats are colored white (weak beats). One can assign a different instrument to strong and weak beats. The target pattern length decides how many weak beats exist between the strong beats. Of course it's not always possible to honor this request: in a cycle with a length of 5 beats and a target pattern length of 3 beats (left wheel in the screenshot) we will have a phrase of 3 beats that conforms to the target pattern length, and a phrase consisting of the 2 remaining beats that makes a "best effort" to comply with the target pattern length.

Technically this is accomplished by running Euclid's algorithm. This algorithm is normally used to calculate the greatest common divisor of two numbers, but here we are mostly interested in its intermediate results. In Euclid's algorithm, to calculate the greatest common divisor of an integer m and a smaller integer n, the smaller number n is repeatedly subtracted from the greater until the greater becomes zero or smaller than n; the result is called the remainder. This remainder is then repeatedly subtracted from the smaller number to obtain a new remainder. The process continues until the remainder is zero; when that happens, the corresponding smaller number is the greatest common divisor of the original two numbers n and m.

Let's try it out on the situation of the left wheel in the screenshot. The greater number m is 5 (length) and the smaller number n is 3 (target pattern length). Now the recipe says to repeatedly subtract 3 from 5 until you get something smaller than 3. We can do this exactly once:

5 − 1·3 = 2

We can rewrite this as:

5 = 1·3 + 2

This we can interpret as: the cycle of 5 beats decomposes into 1 phrase of 3 beats, followed by a phrase of 2 beats (the remainder). Each phrase consists of a single strong beat followed by all weak beats. In a symbolic representation more easily read by musicians one might write: x..x. (In the notation of the previous part of this article one could also write 10010).

Euclid's algorithm doesn't stop here. Now we have to repeatedly subtract the remainder 2 from the smaller number 3:

3 = 1·2 + 1

This in turn can be read as: the phrase of 3 beats can be further decomposed into 1 phrase of 2 beats followed by a phrase consisting of 1 beat. In a symbolic representation this is x.x, and Euclid continues:

2 = 2·1 + 0

The phrase of two beats can be represented symbolically as: xx. We've reached remainder 0 and Euclid stops: apparently the greatest common divisor between 5 and 3 is 1.

Now it's time to realize what we really did: 
  • We decomposed a phrase of 5 beats in a phrase of 3 beats and a phrase of 2 beats making a rhythm x..x. 
  • Then we further decomposed the phrase of 3 beats into a phrase of 2 beats followed by a phrase of 1 beat. 
  • We can substitute this refined 3 beat phrase in our original rhythm of 5 = 3+2 beats to get a rhythm consisting of 5 = (2 + 1) + 2 beats: x.xx. 
  • I hope it's clear by now that by choosing how long to continue using Euclid's algorithm, we can decide how fine-grained we want our rhythms to become. 
  • This is where the max pattern length slider comes into play. 
The length slider and the target pattern slider will determine a rough division between strong and weak beats by running Euclid's algorithm just once, whereas the max pattern length slider helps you decide how long to carry on Euclid's algorithm to further refine the generated rhythm.
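The walkthrough above translates almost line for line into Python. This is a hedged re-implementation of the idea, not the actual p5js source: one divmod step splits the cycle into phrases, each phrase renders as a strong beat followed by weak beats, and a second step refines the longer phrases.

```python
def split(m, n):
    """One step of Euclid: m = q*n + r, read as q phrases of n beats
    plus (if r > 0) a remainder phrase of r beats."""
    q, r = divmod(m, n)
    return [n] * q + ([r] if r else [])

def render(phrases):
    # each phrase: one strong beat 'x' followed by weak beats '.'
    return "".join("x" + "." * (p - 1) for p in phrases)

coarse = split(5, 3)                  # [3, 2]  -- the 5 = 1*3 + 2 step
print(render(coarse))                 # x..x.

# refine phrases longer than 2 beats, as in the 3 = 1*2 + 1 step
fine = [q for p in coarse for q in (split(p, 2) if p > 2 else [p])]
print(render(fine))                   # x.xx.
```

Running Euclid for more or fewer steps before rendering is exactly what the max pattern length slider controls.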


by Stefaan Himpe (noreply@blogger.com) at May 28, 2016 02:22 PM

May 24, 2016

digital audio hacks – Hackaday

Secret Listening to Elevator Music

While we don’t think this qualifies as a “fail”, it’s certainly not a triumph. But that’s what happens when you notice something funny and start to investigate: if you’re lucky, it ends with “Eureka!”, but most of the time it’s just “oh”. Still, it’s good to record the “ohs”.

Gökberk [gkbrk] Yaltıraklı was staying in a hotel long enough that he got bored and started snooping around the network, like you do. Breaking out Wireshark, he noticed a lot of UDP traffic on a nonstandard port, so he thought he’d have a look.

A couple of quick Python scripts later, he had downloaded a number of the sample packets and decoded them into hex and found the signature for LAME, an MP3 encoder. He played around with byte offsets until he got a valid MP3 file out, and voilà, the fantastic reveal! It was the hotel’s elevator music stream — that he could hear outside in the corridor with much less effort. (Sad trombone.)
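The byte-offset hunt can be automated. Here's a hedged sketch of the kind of script involved (the packet bytes below are fabricated for illustration, not from the actual capture): scan each payload for an MPEG audio frame sync, eleven set bits in a row, and slice the stream from that offset.

```python
def find_mp3_frame(payload: bytes) -> int:
    """Return the offset of the first MPEG audio frame sync word
    (0xFF followed by a byte whose top three bits are set), or -1."""
    for i in range(len(payload) - 1):
        if payload[i] == 0xFF and payload[i + 1] & 0xE0 == 0xE0:
            return i
    return -1

# Fabricated example payload: a little header junk, then an MP3 frame
# header and the telltale LAME encoder tag.
packet = b"\x00\x17junk" + b"\xff\xfb\x90\x64" + b"LAME3.99"
assert b"LAME" in packet          # the giveaway spotted in the hex dump
print(find_mp3_frame(packet))     # 6 -- strip everything before this offset
```

Concatenating the sliced payloads in capture order yields a file most MP3 players will accept, since decoders resynchronize on frame headers anyway.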

But just because nothing came up this time doesn’t mean that nothing will come up next time. And it’s important to keep your skills sharp for when you really need them. We love following along with peoples’ reverse engineering efforts, whether or not they end up finding anything. What oddball signals have you found lately?

Thanks [leonardo] for the tip! Wireshark graphic from Softpedia’s entry on Wireshark. Simulated-phosphor audio display by Oona [windytan] Räisänen (check that out!).


Filed under: digital audio hacks, security hacks, slider

by Elliot Williams at May 24, 2016 08:01 AM

May 22, 2016

aubio

Install aubio with pip

You can now install aubio's python module using pip:

$ pip install git+git://git.aubio.org/git/aubio

This should work for Python 2.x and Python 3.x, on Linux, Mac, and Windows. PyPy support is on its way.

May 22, 2016 01:00 PM

May 17, 2016

OSM podcast

May 14, 2016

Libre Music Production - Articles, Tutorials and News

EMAP - a GUI for Fluidsynth

EMAP - a GUI for Fluidsynth

EMAP (Easy Midi Audio Production) is a graphical user interface for the Fluidsynth soundfont synthesizer. It functions as a Jack compatible:

by admin at May 14, 2016 04:12 PM

May 11, 2016

Pid Eins

CfP is now open

The systemd.conf 2016 Call for Participation is Now Open!

We’d like to invite presentation and workshop proposals for systemd.conf 2016!

The conference will consist of three parts:

  • One day of workshops, consisting of in-depth (2-3hr) training and learning-by-doing sessions (Sept. 28th)
  • Two days of regular talks (Sept. 29th-30th)
  • One day of hackfest (Oct. 1st)

We are now accepting submissions for the first three days: proposals for workshops, training sessions and regular talks. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Use Cases: systemd in today’s and tomorrow’s devices and applications
  • systemd and containers, in the cloud and on servers
  • systemd in distributions
  • systemd in embedded devices and IoT
  • systemd on the desktop
  • Networking with systemd
  • … and everything else related to systemd

Please submit your proposals by August 1st, 2016. Notification of acceptance will be sent out 1-2 weeks later.

If submitting a workshop proposal please contact the organizers for more details.

To submit a talk, please visit our CfP submission page.

For further information on systemd.conf 2016, please visit our conference web site.

by Lennart Poettering at May 11, 2016 10:00 PM

May 10, 2016

Linux – CDM Create Digital Music

Trigger effects in Bitwig with MIDI, for free

In the latest chapter of “people on the Internet doing cool things for electronic music,” here’s a creation by Polarity. It lets you rapidly trigger effects parameters via MIDI. And if you’re a Bitwig Studio enthusiast, it’s available for free.

Clever stuff. YouTube has the download link and instructions.

Polarity, based in Berlin, describes himself thusly:

Hi i´m Polarity and do music at home in my small bedroom studio. I record regularly sessions and publish them here. I also broadcast live on twitch from time to time.

(In English: Hello, my name is Polarity and I make music here in Berlin in my small bedroom. I regularly record sessions and publish them here. If you like, you can also follow along live on Twitch, where I often stream!)

(Ah, I was wondering when I’d run into someone using Twitch – the live streaming service used largely by gamers – for music.)

More:
Twitch.tv: http://www.twitch.tv/polarity_berlin
Soundcloud: https://soundcloud.com/polarity

It’s an interesting form of promotion – give musicians something they can use. And if that’s where music is headed, maybe that’s not a bad thing. It means the means of making music will spread along with musical ideas, which, in today’s connected worldwide online village, seems a positive.

The post Trigger effects in Bitwig with MIDI, for free appeared first on CDM Create Digital Music.

by Peter Kirn at May 10, 2016 09:29 PM

May 06, 2016

KXStudio News

Changes in KXStudio repositories

Hey everyone, just a small heads up about the KXStudio repositories.

If you use Debian Testing or the new Ubuntu 16.04 you probably saw some warnings regarding weak SHA1 keys when checking for updates.
We're aware of this issue and a fix is coming soon, but it will require some changes in the repositories.

First, we'll get rid of the 'lucid' builds and rebuild all of them in the 'trusty' series.
For those of you who were using Debian 6 or something older than Ubuntu 14.04, the repositories will stop working later this month.

Second, the gcc5 specific packages will be migrated from 'wily' series to 'xenial'.
This means you'll no longer be able to use the KXStudio repositories if you're running Ubuntu 15.10.
If that's the case for you, please update to 16.04 as soon as possible. Note that 15.10 will be officially end-of-life in 2 months.

And finally, the gcc5 packages will begin using Qt5 instead of Qt4 for some applications.
This will include Carla, Qtractor and the v1 series plugins.
Hopefully this won't break anything, but if it does please let us know.

That's it for now. Have a nice weekend!

by falkTX at May 06, 2016 10:00 AM

May 05, 2016

News – Ubuntu Studio

Help Us Put Some Polish on Ubuntu Studio

We are proud to have Ubuntu Studio 16.04 out in the wild. And the next release can and should be better. It WILL be better if you help! Are there specific packages that should be included or removed? Are there features you would like to see? We cannot promise to do everything you ask, but […]

by Set Hallstrom at May 05, 2016 09:58 AM

fundamental code

Lad Discussion Peaks

A History of LAD As Seen Through Heated Discussion

Warning
Summarizing years of discussions is a difficult task. I do not intend to distort the meaning of quotes and if you have a particular quote which you feel is being misrepresented please let me know. This article is designed to review the community as a whole, not impose my opinions onto it. Posts reflect the sentiment of the user at the time of posting and may very likely not reflect the current state of projects or even the authors

What is LAD?

To get a bigger picture of what exactly has led up to the current state of affairs within LAD, I decided it was a good idea to read through some historic [LAD] discussions which made up some of those peaks in activity. This is somewhat biased towards the flame wars and community rantings, but those discussions should still reveal plenty about the evolution of pain points within the community. First, to frame this community analysis, let's look at how the linux audio mailing list officially defines its goal:

Our goal is to encourage widespread code re-use and cooperation, and to provide a common forum for all audio related software projects and an exchange point for a number of other special-interest mailing lists.

This simply shows that the mailing list should be a cooperative place and somewhere where information should be exchanged. Some medium like this is a pretty darn valuable resource and it was recognized as such early on.

The problem is that most Linux audio apps are developed by people who have full-time jobs doing other things. The problems involved in designing audio apps are so great that even those people who are able to work full time on Linux audio are often stumped as to how to implement the desired solutions.
— Mark Knecht October 2002

With varying levels of success there have been some huge discussions about the tradeoffs for different plugin standards, session managers, licenses, knobs (boy do audio devs love talking about knobs), and a variety of other topics. Even with the advantages that something like a community mailing list offers, it’s questionable whether people really consider linux audio developers as a whole a community.

I think the linux audio world is too small and varied to have a tightly knit organisation like the Gnome guys.
— Steve Harris June 2004
If you want to organize something go ahead and organize it, but please don’t tell me that I have to conform to some consumer driven vision of the great commercial future of Linux Audio.
— Jan Depner June 2004
The notion of "the development community" is a misnomer. In fact, what we have are "development communities" (plural).
— Fred Gleason February 2013

Fundamentally, the 'community' is made up of a large variety of independent individuals who need a wide spread of specialization in order to make effective software. This has typically manifested itself in many different single-developer projects without a great sense of cohesion. This sort of hobbyist development has produced a lot of content, though the overall workflow may fall short of users' expectations, and many projects are subject to bitrot after the small development team moves on to other projects. Everyone has conflicting ideas on how things should work:

Everyone has their point of view. It’s not like you will tell someone "I want to add this feature to your app/api" and will say "Ok". You will simply get an answers like: -No, sorry, I wont accept that patch, i’d rather the library concentrates only on this. -Why dont you do it as a separate library? -Feel free to fork this and add it yourself. -Yeah I recognize it’s useful, but I think it’s out of place, inconsistent with the rest, that I try to keep simple.

— Juan June 2005

Before moving on to the issues presented in this community I want to take a brief detour showing how the linux-audio-dev mailing list and the linux-audio-user mailing list are linked. Within the overall community you frequently have developers who extensively use other LA tools and you have quite a few users who occasionally dabble in details generally reserved for developers. By looking at how many people fall into each one of these categories as a function of how often they post to LAD/LAU we can see that there is an overlap for casual users and a very strong overlap for heavyweight posters.

lml overall cross posters

This overall trend also exists on a much smaller scale. Within any given month there is a significant number of people who have posted on both lists.

lml monthly cross posters

These individuals tend to generate a very significant number of the total posts in any given month as well.

lml monthly cross posts

Given this relationship, a good number of the problems observed on the LAD list should correspond to issues visible to users as well. In some cases, like the 'What sucks about Linux Audio' threads, there have been corresponding threads on both lists. In other cases ideas simply flow from one location to another.

Initial Friction

In the past it wasn’t all that unusual for these disagreements to leak onto the mailing lists where they could grow substantially. A good example of this friction would be the Impro-Visor forking effort in 2009. In this thread a fork of an existing project had been created due to GPL licensing issues, but the way the forking was done produced disagreement within the community.

One of the main reasons why R. Stallman started GNU/FSF/GPL because of it’s social aspect. You learn kids on schools for example to corporate and help each other, being social.
— Grammostola Rosea Aug 2009
Forking a project is by it’s nature, and GPL "rights" aside, quite an impact on the author. He or she may have been sweating over their code base for some time, and i don’t think anyone could say they wouldn’t feel a bit awkward if they saw their code being forked, and developed further. Even more so for those who may not have developed their code under the assumption of GPL. From an "outsider’s" point of view, it would seem like a big decision to take both ways, if both parties have any sort of empathy.
— Alex Stone Aug 2009

The individual forking the project could be described as quite aggressive in his approach, which did spawn quite the meandering discussion. This thread was one of the first threads in my reading of [LAD] which seemed to significantly put users off, and it certainly didn't help that in June a rather heated flame war over RealtimeKit had already driven away that project's developer.

I have been following these list serves for a while, but I am just not interested in this kind of drama, and would like to mention for the record that I will no longer be following the lad or lau list serves.
— Justin Smith July 2009
In the last 18 months in LAD we’ve seen some pretty emotive flamewars about Reaper, LV2 in closed source software, LinuxSampler licensing, plugin output negotiation, JACK packaging, JACK and DBUS, PulseAudio, the way qjackctl starts up jackd, RTKit, and probably some other things I’ve forgotten. And this. This isn’t a high traffic list; the flames quite likely outnumber the rest.
— Chris Cannam July 2009
So now is the time to give your positive feedback and constructive critics. Don’t troll and don’t start another flame war unless your goal is to alienate me to stage of me detaching from this community. I will not respond to trolish and flamish mails, feel free to contact me with private mails if you prefer so.
— Nedko Arnaudov November 2009

As these discussions scale out of proportion it’s easy for them to shift from a heated dialog into a flame war. These flame wars often result in huge misunderstandings, a lot of misinformation, tons of angry emails, and, importantly, wasted time. Wasted time is a significant cost if these lists want to retain users and keep their discussions targeted and helpful to those involved.

When Flamewars Aren’t Stoked

Of course, these so-called flame wars are not entirely bad for a community to have.

Most of the occasionally 'caustic' folk in this community …​ understand that heated arguments are just a part of how developers find the best solution, and there is no ill will involved. It’s simply a useful tool/process - and arguably, I would say, the most effective way of hammering out good software design the world has seen to date.

Unfortunately there are always a few childish fools who don’t understand this concept (or think it’s a competition and can’t handle the fact that they were wrong) and elevate silly little arguments into long term personal grudges…​ Like trolls, they are best ignored while the rest of us get on with useful things.

What we’re looking for is less completely irrelevant noise like this. Particularly in response to jokes (blatant smileys and all).

— Drobilla July 2009

When a heated discussion stays on topic, real work can be done, though it is often off-putting to bystanders and those caught in the middle.

When Flamewars Are Stoked

Generally, for these flame wars to take flight, there needs to be a variety of people stoking the flames without directly contributing to the discussion in a meaningful way (though this is not always the case). In most threads this was done by an assortment of users, mostly infrequent posters. There was, however, one repeat offender who during July 2010 caused quite the meltdown on the LAD mailing list: Ralf Mardorf. I originally wasn’t going to mention this, but essentially all the flames and off-topic communication that July could be traced back to him.

Who is Ralf Mardorf?

I never programmed anything for Linux. I’m not able to do it and I don’t have the time to learn it.

I subscribed to the list, because I needed some information when I tried to program for Linux audio. I guess you want people to learn how to program for Linux audio. What you’re looking for is an attitude test, not a test about programming knowledge. I’ve got knowledge about programming, not about programming for Linux. You don’t like my attitude, but I hope you like other people who have the attitude that you want, even if they don’t have programming knowledge. (This is another issue, but not that one OS might or might not be good, better or what ever, so I guess I should reply :p)

Btw. on user lists a user don’t get some needed information, e.g. actually about what kernel is fine with rtirq and what kernel isn’t fine with it, so it can become impossible to set up an audio Linux, another reason why I’m subscribed to this list.

I’m and other users are responsible for my/their Linux installations, we should use all available sources to get knowledge. Some, me too, do so. In addition now you expect from users that they also should have the same attitude?

— Ralf Mardorf August 2009

And what happened in July?

Well, it started off in a discussion about MIDI jitter. This is something which can be quantified and discussed in terms of numbers quite easily. Ralf brought up the issue, which could have implied some interesting bugs, design flaws, or configuration issues. Some simple tests to find the issue were proposed, but the data was never returned to the list, resulting in posts such as:

I know very gifted musicians who do like me and they always 'preach' that I should stop using modern computers and I don’t know much averaged people. So the listeners in my flat for sure would be able to hear even failure that I’m unable to hear.
— Ralf Mardorf July 2010

There is no objective valid timing fluctuation. The musical savant next door might be much more sensitive than I’m, regarding to the groove, I don’t know …​ I guess there doesn’t live a musical savant next door, perhaps I’m this savant ;).

Anyway, forget about my assumptions about ms of jitter. I’m fine with the C64, Atari ST and all those stand alone sequencers from the 80ies. I tested did it, but I’m sure I’ll be able to hear hear the difference to my Linux computer …​ not when listening to all MIDI instruments played alone at the same time, but when listening to MIDI instruments + audio tracks.

— Ralf Mardorf July 2010
Sorry for this PS, I try to learn not to write such a high amount of mails :(, but it could be important.
— Ralf Mardorf July 2010

Of course, this was pretty frustrating to the developers who wanted to solve the problem at hand.

You are comparing a banana and an orange to find out which one is sweeter. Given the nature of the problem it would help a lot to have as little differences between the systems under test, otherwise it’s impossible to track it down.
— Robin Gareus July 2010
We’re getting seriously off-topic here. After all, this is developer list. What happened to the ALSA MIDI Jitter measurements and test-samples?
— Robin Gareus July 2010

This was followed up by numerous off-topic threads. Ralf Mardorf ended up accounting for 44 of 463 posts in June and 165 of 653 messages in July. He frequently replied to himself, and if you look at the timestamps from that month there is even a period where 7 emails were fired off to the list with no responses from anyone else in between. I’m honestly not sure whether this was intentional trolling, but when a thread named "STEREO RULES" in all caps is created in the midst of the chaos you have to at least suspect it.

The replies seen in this month highlight some of the major issues at play. Developers generally want to know that their software works and that people can use it. Crucially, they also have very limited time, considering that this work is typically done on top of their other obligations, with no return other than the enjoyment of it.

General Thoughts

So, up to this point in history, flame wars have been a problem, and they have been fueled by a number of individuals who, intentionally or otherwise, don’t contribute substantially to the original aim of the discussion. Both users and developers of Linux audio software seem frustrated with this, as it makes it difficult to obtain information, convey accurate information, and interact with other members of the community without wading through a lot of noise. Some of these issues are mirrored in more recent 'heated discussions', but this writeup is long enough, so that will have to wait for part two.

May 05, 2016 04:00 AM

May 03, 2016

Libre Music Production - Articles, Tutorials and News

Guitarix 0.35 released including much anticipated interface redesign

Guitarix 0.35 released including much anticipated interface redesign

Guitarix has recently seen a new release, version 0.35. As always there are new plugins and bug fixes, but the big news with this release is the overhauled interface, courtesy of Markus Schmidt. Markus is also responsible for Calf Studio Gear's plugin design, as well as the DSP of many of its plugins.

by Conor at May 03, 2016 07:10 PM

April 27, 2016

rncbc.org

Qtractor 0.7.7 - The Haziest Photon is out!

Hi everybody,

On the wrap of the late miniLAC2016@c-base.org Berlin (April 8-10), where this Yet Same Old Qstuff* (continued) workshop babbling of yours truly (slides, videos) took place.

There's really one (big) thing to keep in mind, as always: Qtractor is not, never was, meant to be a do-it-all monolith DAW. Quite frankly it isn't a pure modular model either. Maybe we can agree on calling it a hybrid perhaps? And still, all this time, it has been just truthful to its original mission statement--modulo some Qt major version numbers--nb. it started on Qt3 (2005-2007), then Qt4 (2008-2014), it is now Qt5, full throttle.

Now,

I must have started by saying: uh, this is probably the best dot (or, if you rather call it that way, beta) release of them all!

Qtractor 0.7.7 (haziest photon) is out!

Everybody is here compelled to update.

Leave no excuses behind.

As for the mission statement coined above, you know it's the same as ever was (and it now goes to eleven years in the making):

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Change-log:

  • LV2 UI Touch feature/interface support added.
  • MIDI aware plug-ins are now void from multiple or parallel instantiation.
  • MIDI tracks and buses plug-in chains now honor the number of effective audio channels from the assigned audio output bus; dedicated audio output ports will keep default to the stereo two channels.
  • Plug-in rescan option has been added to plug-ins selection dialog (yet another suggestion by Frank Neumann, thanks).
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, thanks).
  • Immediate visual sync has been added to main and MIDI clip editor thumb-views (a request by Frank Neumann, thanks).
  • Fixed an old MIDI clip editor contents disappearing bug, which manifested when drawing free-hand (ie. Edit/Select Mode/Edit Draw is on) over and behind its start/beginning position (while in the lower view pane).

Wiki (on going, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Have fun.

by rncbc at April 27, 2016 06:30 PM

April 23, 2016

digital audio hacks – Hackaday

Color-Changing LED Makes Techno Music

As much as we like addressable LEDs for their obedience, why do we always have to control everything? At least the participants of the MusicMaker Hacklab, which was part of the Artefact Festival in February this year, have learned that sometimes we should just sit down with our electronics and listen.

With the end of the Artefact Festival approaching, they still had this leftover color-changing LED from an otherwise scavenged toy reverb microphone. When powered by a 9 V battery, the LED would start a tiny light show, flashing, fading and mixing the very best out of its three primary colors. Acoustically, however, it spent most of its time in silent dignity.

As you may know, this kind of LED contains a tiny integrated circuit. This IC pulse-width-modulates the current through the light-emitting junctions in preprogrammed patterns, thus creating the colorful light effects.

To give the LED a voice, the participants added a 1 kΩ series resistor to the LED’s “anode”, which effectively translates variations in the current passing through the LED into measurable variations of voltage. This signal could then be fed into a small speaker or a mixing console. The LED expressed its gratitude for the life-changing modification by chanting its very own disco song.
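The trick is just Ohm’s law: the voltage developed across the series resistor is proportional to the LED current. A minimal Python sketch of that arithmetic, using assumed example currents (the article only specifies the 1 kΩ resistor):

```python
R_SENSE = 1_000.0  # ohms; the series resistor value from the article

def current_to_voltage(current_amps, r_ohms=R_SENSE):
    # Ohm's law: the voltage across the resistor tracks the LED current
    return current_amps * r_ohms

# Assumed example: the IC switching the LED between 0 mA and 5 mA
swing = current_to_voltage(0.005) - current_to_voltage(0.0)
print(swing)  # 5.0 volts of audio-rate signal
```

A few volts of swing is plenty for a mixer's line input, which is why the mod works with nothing more than one resistor.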

This particular IC seems to operate at a switching frequency of about 1.1 kHz, and the resulting square wave signal noticeably dominates the mix. However, not everything we hear may be explained solely by the PWM. There are those rhythmic “thump” noises, shifts in pitch and amplitude, and more to analyze and learn from. Not wanting to spoil your fun of making sense of the beeps and cracks (feel free to spoil as much as you want in the comments!), we just say enjoy the video, and thanks to the people of STUK Belgium for sharing their findings.
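To get a feel for why a ~1.1 kHz switching frequency dominates the mix, here is a minimal stdlib-only Python sketch that synthesizes such a square wave; the sample rate and duration are our assumptions, not values from the hack:

```python
SAMPLE_RATE = 44_100   # assumed output rate
SWITCH_HZ = 1_100.0    # the article's estimated PWM switching frequency

def square_wave(freq_hz, seconds, rate=SAMPLE_RATE):
    """Naive square wave: high for the first half of each period, low after."""
    period = rate / freq_hz            # samples per cycle
    n = int(seconds * rate)
    return [1.0 if (i % period) < period / 2 else -1.0 for i in range(n)]

samples = square_wave(SWITCH_HZ, 0.01)  # a 10 ms burst
print(len(samples))  # 441 samples
```

Write those samples to a WAV file and you get a harsh, buzzy tone right in the ear's most sensitive range, which is exactly what sits under the LED's "song".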


Filed under: digital audio hacks, led hacks

by Moritz Walter at April 23, 2016 11:00 AM

April 22, 2016

open-source – CDM Create Digital Music

Hack – listen to one LED create its own micro rave

Surprise: there’s a little tiny rave hiding inside a flickering LED lamp from a toy. Fortunately, we can bring it out – and you can try this yourself with LED circuitry, or just download our sound to remix.

Surprise Super Fun Disco LED Hack from Darsha Hewitt on Vimeo.

But let’s back up and tell the story of how this began.

The latest edition of our MusicMakers Hacklab brought us to Leuven, Belgium, and the Artefact Festival held at STUK. Now, with all these things, very often people come up with lofty (here, literally lofty) ideas – and that’s definitely half the fun. (We had one team flying an unmanned drone as a musical instrument.)

But sometimes it’s simple little ideas that steal the show. And so it was with a single LED superstar. Amine Mentani brought some plastic toys with flickering lights, and participant Arvid Jense, along with my co-facilitator and all-around artist/inventor/magician Darsha Hewitt, decided to make a sound experiment with them. They were joined by participant (and one-time European Space Agency artist resident) Elvire Flocken-Vitez.

It seems that the same timing used to make that faux flickering light effect generates analog voltages that sound, well, amazing. (See more on this technique in comments from readers below.)

You might not get as lucky as we did with animated LEDs you find – or you might find something special, it’s tough to say. But you can certainly try it out yourself, following the instructions here and on a little site Darsha set up (or in the picture here).

And by popular demand of all our Hacklabbers from Belgium, we’ve also made the sound itself available. So, you can try remixing it, sampling it, dancing to it, whatever.

https://freesound.org/people/dardi_2000/sounds/343087/

More:

http://www.darsha.org/artwork/disco-led-hack/

And follow our MusicMakers series on Facebook (or stay tuned here to CDM).

The post Hack – listen to one LED create its own micro rave appeared first on CDM Create Digital Music.

by Peter Kirn at April 22, 2016 02:07 PM

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.1 stable release (binaries)

Pre-built binary images of the 1.8.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 22, 2016 12:00 PM

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.6.4 stable release (binaries)

Pre-built binary images of the 1.6.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 22, 2016 11:00 AM

April 21, 2016

News – Ubuntu Studio

New Ubuntu Studio Release 16.04 and New Project Lead!

New Project Lead In January 2016 we had an election for a new project lead, and the winner was Set Hallström, who will be taking over the project lead position right after this release. He will be continuing for another two years until the next election in 2018. The team of developers has also seen […]

by Set Hallstrom at April 21, 2016 04:44 PM

April 20, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.0. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

April 20, 2016 04:00 PM

OSM podcast

aubio

node-aubio

Thanks to Gray Leonard, aubio now has its own bindings for node.js.

A fork of Gray's git repo can be found at:

A simple example showing how to extract bpm and pitch from an audio file with node-aubio is included.

To install node-aubio, make sure libaubio is installed on your system, and follow the instructions at npmjs.com.
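Whatever the binding, the arithmetic behind a tempo estimate is simple: average the intervals between detected beats and divide that into 60. A stdlib-only Python sketch with hypothetical beat timestamps (not output from aubio itself):

```python
# A tempo tracker like aubio emits beat timestamps; turning those into BPM
# is just 60 divided by the mean inter-beat interval.
def bpm_from_beats(beat_times):
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Hypothetical beats half a second apart
print(bpm_from_beats([0.0, 0.5, 1.0, 1.5]))  # 120.0
```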

April 20, 2016 12:28 PM

April 17, 2016

Libre Music Production - Articles, Tutorials and News

New video tutorial describing a complete audio production workflow using Muse and Ardour

Libre Music Production proudly presents Michael Oswald's new 8+ hour video tutorial describing a complete audio production workflow using MusE and Ardour.

In this tutorial you will learn how to import, clean up and edit a MIDI file using MusE. It then goes on to show how to import the MIDI file into Ardour and set up instruments to play the song. Then it's on to guitar recording and audio editing in Ardour, selecting sounds and editing several takes.

The tutorial continues with vocal recording and editing, mixing and mastering the song.

by admin at April 17, 2016 04:19 PM

A complete audio production workflow with Muse and Ardour

Audio production with Muse and Ardour is a 6 part video tutorial showing a complete workflow using FLOSS audio tools.

In this tutorial you will learn how to import, clean up and edit a MIDI file using MusE. It then goes on to show how to import the MIDI file into Ardour and set up instruments to play the song.

On to guitar recording and audio editing in Ardour, selecting sounds and editing the takes.

The tutorial continues with vocal recording and editing, mixing and mastering the song.

by admin at April 17, 2016 02:51 PM

April 15, 2016

digital audio hacks – Hackaday

Hackaday Dictionary: Ultrasonic Communications

Say you’ve got a neat gadget you are building. You need to send data to it, but you want to keep it simple. You could add a WiFi interface, but that sucks up power. Bluetooth Low Energy uses less power, but it can get complicated, and it’s overkill if you are just looking to send a small amount of data. If your device has a microphone, there is another way that you might not have considered: ultrasonic communications.

The idea of using sound frequencies above the limit of human hearing has a number of advantages. Most devices already have speakers and microphones capable of sending and receiving ultrasonic signals, so there is no need for extra hardware. Ultrasonic frequencies are beyond the range of human hearing, so they won’t usually be audible. They can also be transmitted alongside standard audio, so they won’t interfere with the function of a media device.

A number of gadgets already use this type of communications. The Google Chromecast HDMI dongle can use it, overlaying an ultrasonic signal on the audio output it sends to the TV. It uses this to pair with a guest device by sending a 4-digit code over ultrasound that authorizes it to join an ad-hoc WiFi network and stream content to it. The idea is that, if the device can’t pick up the ultrasound signal, it probably wasn’t invited to the party.

We reported some time ago on an implementation of ultrasonic data using GNU Radio by [Chris]. His writeup goes into a lot of detail on how he set the system up and shows a simple demo using a laptop speaker and microphone. He used Frequency Shift Keying (FSK) to encode the data into the audio, using a base frequency of 23 kHz and sending data in five-byte packets.
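The modulation side of binary FSK is easy to sketch. The following stdlib-only Python uses [Chris]'s 23 kHz base frequency and five-byte packets; the frequency shift, sample rate, and samples-per-bit values are our assumptions, not his actual settings:

```python
import math

RATE = 96_000            # sample rate; assumed (must exceed twice the top tone)
BASE_HZ = 23_000.0       # [Chris]'s base frequency from the article
SHIFT_HZ = 1_000.0       # mark/space separation; an assumption
SAMPLES_PER_BIT = 480    # symbol length; also an assumption (5 ms per bit)

def fsk_modulate(data: bytes):
    """Binary FSK: a 0 bit is a tone at BASE_HZ, a 1 bit at BASE_HZ + SHIFT_HZ."""
    out = []
    for byte in data:
        for bit in range(8):                      # MSB first
            f = BASE_HZ + SHIFT_HZ * ((byte >> (7 - bit)) & 1)
            for n in range(SAMPLES_PER_BIT):
                out.append(math.sin(2 * math.pi * f * n / RATE))
    return out

# One five-byte packet, as in the article
sig = fsk_modulate(b"hello")
print(len(sig))  # 5 bytes * 8 bits * 480 samples = 19200
```

At 480 samples per bit this runs at only 200 bit/s, which illustrates the speed problem discussed below: the receiver needs many samples per symbol to spot the frequency shift reliably on cheap speakers and microphones.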

Since then, [Chris] has expanded his system so that two devices can communicate bi-directionally using different frequencies. He also changed the modulation scheme to Gaussian frequency shift keying (GFSK) for reliability, and even added a virtual driver layer on top so the connection can transfer TCP/IP traffic. Yup, he built an ultrasonic network connection.

His implementation underlines one of the problems with this type of data transmission, though: It is slow. The speed of the data transmission is limited by the ability of the system to transmit and receive the data, and [Chris] found that he needed to keep it slow to work with cheap microphones and speakers. Specifically, he had to keep the number of samples per symbol used by the GFSK modulation high, giving the receiver more time to spot the frequency shift for each symbol in the data stream. That’s probably because the speaker and microphone aren’t specifically designed for this sort of frequency. The system also requires a preamble before each data packet, which adds to the latency of the connection.

So ultrasonic communications may not be fast, but they are harder to intercept than WiFi or other radio frequency signals. Especially if you aren’t looking for them, which inspired hacker [Kate Murphy] to create Quietnet, a simple Python chat system that uses the PyAudio library to send ultrasonic chat messages. For extra security, the system even allows you to change the carrier frequency, which could be useful if the feds are onto you. Whether overt, covert, or just for simple hardware configuration, ultrasonic communications is something to consider playing around with and adding to your bag of hardware tricks.


Filed under: digital audio hacks, Hackaday Columns, wireless hacks

by Richard Baguley at April 15, 2016 05:01 PM

April 14, 2016

GStreamer News

GStreamer 1.6.4 stable release

The GStreamer team is pleased to announce the second bugfix release in the old stable 1.6 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.6.x. For a full list of bugfixes see Bugzilla.

See /releases/1.6/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-editing-services, gst-python, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-editing-services, gst-python.

April 14, 2016 06:00 PM

Linux – CDM Create Digital Music

A totally free DAW and live environment, built in SuperCollider: LNX_Studio

Imagine you had a DAW with lots of live tools and synths and effects – a bit like FL Studio or Ableton Live – and it was completely free. (Free as in beer, free as in freedom.) That’s already fairly cool. Now imagine that everything in that environment – every synth, every effect, every pattern maker – was built in SuperCollider, the powerful free coding language for electronic music. And imagine you could add your own stuff, just by coding, and it ran natively. That moves from fairly cool to insanely cool. And it’s what you get with LNX_Studio, a free environment that runs on any OS (Mac now, other builds coming), and that got a major upgrade recently. Let’s have a look.

LNX_Studio is a full-blown synth studio. You can do end-to-end production of entire tracks in it, if you choose. Included:

  • Virtual analog synths, effects, drum machines
  • Step sequencers, piano roll (with MIDI import), outboard gear control
  • Mix engine and architecture
  • Record audio output
  • Automation, presets, and programs (which, with quick recall, make this a nice idea starter or live setup)
  • Chord library, full MIDI output and external equipment integration

It’s best compared to the main view of FL Studio, or the basic rack in Reason, or the devices in Ableton Live, in that the focus is building up songs through patterns and instruments and effects. What you don’t get is audio input, multitracking, or that sort of linear arrangement. Then again, for a lot of electronic music, that’s still appealing – and you could always combine this with something like Ardour (to stay in free software) when it’s time to record tracks.

Also good in this age of external gear lust, all those pattern generators and MIDI control layouts play nice with outboard gear. There’s even an “external device” which you can map to outboard controls.

But all of this you can do in other software. And it’d be wrong to describe LNX_Studio as a free, poor man’s version of that gear, because it can do two things those tools can’t.

First, it’s entirely networked. You can hop onto a local network or the Internet and collaborate with other users. (Theoretically, anyway – I haven’t gotten to try this out yet, but the configuration looks dead simple.)

Second, and this I did play with, you can write your own synths and effects in SuperCollider and run them right in the environment. And unlike environments like Max for Live, that integration is fully native to the tool. You just hop right in, add some code, and go. To existing SuperCollider users, this is finally an integrated environment for running all your creations. To those who aren’t, this might get you hooked.

Here’s a closer look in pictures:

When you first get started, you’re presented with a structured environment to add instruments, effects, pattern generators, and so on.

Fully loaded, the environment resembles portions of FL Studio or Ableton Live. You get a conventional mixer display, and easy access to your tools.

Oh, yeah, and out of the box, you get some powerful, nice-sounding virtual analog synths.

But here’s the powerful part – inside every synth is SuperCollider code you can easily modify. And you can add your own code using this powerful, object-oriented, free and open source code environment for musicians.

Effects can use SuperCollider code, too. There’s also a widget library, so adding a graphical user interface is easy.

But whether you’re ready to code or not doesn’t matter much – there’s a lot to play with either way. Sequencers…

Drum machines…

More instruments…

You also get chord generators and (here) a piano roll editor.

When you’re ready to play with others, there’s also network capability for jamming in the same room or over a network (or the Internet).

Version 2.0 is just out, and adds loads of functionality and polish. Most importantly, you can add your own sound samples, and work with everything inside a mixer environment with automation. Overview of the new features (in case you saw the older version):

Main Studio
Channel style Mixer
Programs (group & sequence Instrument presets)
Automation
Auto fade in/out
Levels display
Synchronise channels independently
Sample support in GS Rhythm & SCCode instruments
WebBrowser for importing samples directly from the internet
Local sample support
Sample Cache for off-line use
Bum Note
Now polyphonic
Added Triangle wave & Noise
High Pass filter
2 Sync-able LFO’s
PWM
Melody Maker module (chord progressions, melodies + hocket)
Import MIDI files
Audio In
Support for External instruments & effects
Interfaces for Moog Sub37, Roland JP-08, Korg Volca series
Many new instruments & effects added to SCCode & SCCodeF

I love what’s happening with Eurorack and hardware modular – and there’s nothing like physical knobs and cables. But that said, for anyone who brags that modular environments are a “clean slate” and open environment, I think they’d do well to look at this, too. The ability to code weird new instruments and effects to me is also a way to find originality. And since not everyone can budget for buying hardware, you can run this right now, on any computer you already own, for free. I think that’s wonderful, because it means all you need is your brain and some creativity. And that’s a great thing.

Give the software a try:

http://lnxstudio.sourceforge.net

And congrats to Neil Cosgrove for his work on this – let’s send some love and support his way.

The post A totally free DAW and live environment, built in SuperCollider: LNX_Studio appeared first on CDM Create Digital Music.

by Peter Kirn at April 14, 2016 05:05 PM

blog4

Tina Mariane Krogh Madsen: Body Interfaces: A Processual Scripting

TMS member Tina Mariane Krogh Madsen is going to show a week-long durational performative installation with guests, in Berlin at Galerie Grüntaler 9 (at Grüntaler Strasse 9, as the name suggests) from April 15-22:

Body Interfaces: A Processual Scripting is a performative installation generated by Tina Mariane Krogh Madsen over the duration of one week. It wishes to raise questions regarding the role of documentation in artistic research, its status and how it can feed into other processes.
In the spatial frames of Grüntaler9 the artist will be intensively working with and redeveloping her own concept of an archive and resources based on the documents and remains from previous performances and interventions, which will additionally be resulting in other performance structures.
The installation is in an ongoing process that can be witnessed everyday from 2-8pm. On selected days there will be guests invited to discuss and perform with the artist in the space.
::::::::: Tina Mariane Krogh Madsen’s research works with the body and (as) materiality via combining understandings of it that are derived from site-specific performance art and from working with technology.
A crucial part of this research takes the form of interventions and performances, collectively titled Body Interfaces, first generated during a residency in Iceland (May, 2015) and since then developed and performed in various contexts, constantly challenging their own format and method. These practices deal with the body as interface for experience and communication in relation to other materialities as well as the environment that surrounds and interacts with these. The interface is here read as a transmitting entity and agency between the body and the surrounding surfaces. An important part of Body Interfaces is its own documentation, in various formats, shapes and scripted entities.
The processual installation is open daily from 14:00 until 20:00 and can be witnessed at all times. The processual scripting has a dynamic approach to the space, and therefore the installation will arrive and evolve throughout the days; nothing has been installed in advance – all is part of the process.
The research topic will be shared through performances and interventions as well as an ongoing reel of performance documentation.
Friday April 15: inauguration and installation:
- 14:00h - 19:00h: performative installation (working session)
- 20:00h - 20:30h: Body Interfaces Performance
- from 21:00: Fridäy Süpperclüb (food and drinks by donation)
Saturday April 16: sound (research collaborator: Malte Steiner):
- 14:00h - 19:00h: performative installation (working session)
- 19:00h: sound performance
Sunday April 17: body and site (research collaborator: Nathalie Fari):
- 14:00 - 17:00: performative installation (working session)
- 17:00 - 20:00: performance interventions with Nathalie Fari
Monday April 18: archiving as practice / restructuring and re-contextualizing materials (research collaborator: Joel Verwimp):
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Tuesday April 19: chance as method – invigorating performative structures:
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Wednesday April 20: instruction: re-performance / transformation I (research collaborator: Aleks Slota):
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Thursday April 21: ritual(s):
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Friday April 22: instruction: re-performance / transformation II (research collaborator: Ilya Noé):
- 14:00h - 18:00h: performative installation with performance interventions (working session)
- 20:00h: Body Interfaces Processual Scripting Resume
- from 21:00: Fridäy Süpperclüb (food and drinks by donation)






by herrsteiner (noreply@blogger.com) at April 14, 2016 03:32 PM

April 11, 2016

OpenAV

Fabla2 @ miniLAC video!

In an amazingly short time, the streaming videos of miniLAC are online!! OpenAV’s Fabla2 video is linked here; for other streaming links, check out https://media.ccc.de/v/minilac16-openav. Huge thanks to the Stream-Team for their amazing work! Read more →

by harry at April 11, 2016 09:11 AM

April 06, 2016

Libre Music Production - Articles, Tutorials and News

The Qstuff* Spring'16 Release Frenzy

The Qstuff* Spring'16 Release Frenzy

In the wake of miniLAC2016@c-base.org Berlin, and keeping up with tradition, the most venerable of the Qstuff* are under a so-called Spring'16 release frenzy.

Enjoy the party!

by yassinphilip at April 06, 2016 05:01 PM

April 05, 2016

OpenAV

miniLAC 2016!

miniLAC 2016!

Hey, it's miniLAC this weekend! Are you near Berlin? You should attend: the latest and greatest Linux Audio demos and software, and a chance to meet the community! Check out the schedule here. OpenAV is running a workshop on Fabla2 – showcasing the advanced features of Fabla2, making it suitable for live performance, studio-grade drums, and lots of fun with the new hardware integration for the Maschine… Read more →

by harry at April 05, 2016 07:35 PM

rncbc.org

Qtractor 0.7.6 - A Hazier Photon is released!


Hey, Spring'16 release frenzy isn't over as of just yet ;)

Keeping up with the tradition,

Qtractor 0.7.6 (a hazier photon) is released!

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures, as it evolves into a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Flattr this

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor

Change-log:

  • Plug-ins search path and out-of-process (aka. dummy) VST plug-in inventory scanning have been heavily refactored.
  • Fixed and optimized all dummy processing for plugins with more audio inputs and/or outputs than channels on a track or bus where it's inserted.
  • Fixed relative/absolute path mapping when saving/loading custom LV2 Plug-in State Presets.

Wiki (ongoing, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

 

Enjoy && Keep the fun, always.

by rncbc at April 05, 2016 06:30 PM

The Qstuff* Spring'16 Release Frenzy

In the wake of miniLAC2016@c-base.org Berlin, and keeping up with tradition, the most venerable of the Qstuff* are under a so-called Spring'16 release frenzy.

Enjoy the party!

Details are as follows...

 

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.2 (spring'16) released!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:
http://qjackctl.sourceforge.net
Downloads:
http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl

Change-log:

  • Added a brand new "Enable JACK D-BUS interface" option, split from the old common "Enable D-BUS interface" setup option, which now refers exclusively to QjackCtl's own D-BUS interface.
  • Dropped old "Start minimized to system tray" option from setup.
  • Added a double-click action (toggle start/stop) to the systray (a pull request by Joel Moberg, thanks).
  • Added application keywords to freedesktop.org's AppData.
  • System-tray icon context menu has been fixed/hacked to show up again on Plasma 5 (aka. KDE5) notification status area.
  • Switched column entries in the unified interface device combo-box to make it work for macosx/coreaudio again.
  • Blind fix to a FTBFS on macosx/coreaudio platforms, a leftover from the unified interface device selection combo-box inception, almost two years ago.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Late French (fr) translation update (by Olivier Humbert, thanks).

License:

QjackCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.1 (spring'16) released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsynth.sourceforge.net
Downloads:
http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth

Change-log:

  • Dropped old "Start minimized to system tray" option from setup.
  • CMake script lists update (patch by Orcan Ogetbil, thanks).
  • Added application keywords to freedesktop.org's AppData.
  • System-tray icon context menu has been fixed/hacked to show up again on Plasma 5 (aka. KDE5) notifications status area.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Messages standard output capture has been improved in both ways a non-blocking pipe may get.
  • Regression fix for invalid system-tray icon dimensions reported by some desktop environment frameworks.

License:

Qsynth is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.0 (spring'16) released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsampler.sourceforge.net
Downloads:
http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code
https://github.com/rncbc/qsampler

Change-log:

  • Added application keywords to freedesktop.org's AppData.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Messages standard output capture has been improved again, now in both ways a non-blocking pipe may get.
  • Single/unique application instance control adapted to Qt5/X11.

License:

Qsampler is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

QXGEdit - A Qt XG Editor

QXGEdit 0.4.0 (spring'16) released!

QXGEdit is a live XG instrument editor, specialized in editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG, and thus probably a baseline for many other XG devices.

Website:
http://qxgedit.sourceforge.net
Downloads:
http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit

Change-log:

  • Prevent x11extras module from use on non-X11/Unix platforms.
  • French (fr) translations update (by Olivier Humbert, thanks).
  • Fixed port on MIDI 14-bit controllers input caching.

License:

QXGEdit is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.0 (spring'16) released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. It is inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and has also been promoted to the Maemo package repositories. Nevertheless, QmidiCtl is still effective as a regular desktop application as well.
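For a sense of how little there is to "MIDI over UDP/IP multicast": an ipMIDI-style packet is just the raw MIDI bytes sent to a well-known multicast group. The sketch below is illustrative only; the group and port follow multimidicast's defaults as far as I know, so treat them as assumptions rather than a spec.

```python
import socket

IPMIDI_GROUP = "225.0.0.37"   # assumed multimidicast default multicast group
IPMIDI_PORT = 21928           # assumed default UDP port for "port 1"

def note_on(channel, note, velocity):
    """Raw MIDI note-on: status byte 0x90 | channel, then note and velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

packet = note_on(0, 60, 100)  # middle C, velocity 100, channel 1

# A sender would simply multicast those bytes; the sendto() is left
# commented out so this sketch runs without a configured network.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
# sock.sendto(packet, (IPMIDI_GROUP, IPMIDI_PORT))
sock.close()
```

Every host listening on that group receives the same three bytes, which is all the "gateway" part amounts to.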

Website:
http://qmidictl.sourceforge.net
Downloads:
http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl

Change-log:

  • Added application keywords to freedesktop.org's AppData.

License:

QmidiCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.0 (spring'16) released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. It is inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Website:
http://qmidinet.sourceforge.net
Downloads:
http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet

Change-log:

  • Allegedly fixed the setsockopt(IP_MULTICAST_LOOP) reverse semantics on Windows platforms (as suggested by Paul Davis, from the Ardour ipMIDI implementation, thanks).
  • Added application keywords to freedesktop.org's AppData.
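The IP_MULTICAST_LOOP item deserves a note: on Unix the option controls whether the sending socket's own multicast datagrams are looped back, while on Windows it applies on the receiving side, hence the "reverse semantics". A minimal, hedged demonstration of setting and reading back the option (send-side, as on Unix):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Disable loopback of our own multicast traffic. On Unix this affects what
# we send; on Windows the same option would affect what we receive, so a
# portable app like QmidiNet must special-case the platform.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)
loop = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP)
sock.close()
```

The option itself is standard; only where it takes effect differs between platforms.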

License:

QmidiNet is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Enjoy && keep the fun, always!

by rncbc at April 05, 2016 05:30 PM

April 03, 2016

Pid Eins

Announcing systemd.conf 2016

Announcing systemd.conf 2016

We are happy to announce the 2016 installment of systemd.conf, the conference of the systemd project!

After our successful first conference in 2015, we'd like to repeat the event in 2016. The conference will take place from September 28th until October 1st, 2016 at betahaus in Berlin, Germany. The event is a few days before LinuxCon Europe, which is also located in Berlin this year. This year, the conference will consist of two days of presentations, a one-day hackfest and one day of hands-on training sessions.

The website is online now, please visit https://conf.systemd.io/.

Tickets at early-bird prices are available already. Purchase them at https://ti.to/systemdconf/systemdconf-2016.

The Call for Presentations will open soon; we are looking forward to your submissions! A separate announcement will be published as soon as the CfP is open.

systemd.conf 2016 is organized jointly by the systemd community and kinvolk.io.

We are looking for sponsors! We’ve got early commitments from some of last year’s sponsors: Collabora, Pengutronix & Red Hat. Please see the web site for details about how your company may become a sponsor, too.

If you have any questions, please contact us at info@systemd.io.

by Lennart Poettering at April 03, 2016 10:00 PM

Midichlorians in the blood

Taking Back From Android



Android is an operating system developed by Google around the Linux kernel. It is not like any other Linux distribution: not only have many common subsystems been replaced by other components, but the user interface is also radically different, based on the Java language running in a virtual machine called Dalvik.

An example of a subsystem removed from the Linux kernel is the ALSA Sequencer, a key piece for MIDI input/output with routing and scheduling that makes Linux comparable in capabilities to Mac OSX for musical applications (for musicians, not whistlers) and years ahead of Microsoft Windows in terms of infrastructure. Android did not offer anything comparable until Android 6 (Marshmallow).

Another subsystem from userspace Linux not included in Android is PulseAudio. Instead, OpenSL ES is what can be found on Android for digital audio output and input.

But Android also has some shining components. One of them is Sonivox EAS (originally created by Sonic Network, Inc.), released under the Apache 2 license, and the MIDI synthesizer used by my VMPK for Android application to produce noise. Funnily enough, it provided some legal fuel to Oracle in its battle against Google, because of some Java binding sources that were included in the AOSP repositories. It is not particularly outstanding in terms of audio quality, but it has the ability to provide real-time wavetable GM synthesis without using external soundfont files, and it consumes very few resources, so it may be a good fit for Linux projects on small embedded devices. Let's take it to Linux, then!

So the plan is: for the next Drumstick release, there will be a Drumstick-RT backend using Sonivox EAS. The audio output part is yet undecided, but for Linux will probably be PulseAudio. In the same spirit, for Mac OSX there will be a backend leveraging the internal Apple DLS synth. These backends will be available in addition to the current FluidSynth one, which provides very good quality, but uses expensive floating point DSP calculations and requires external soundfont files.

Meanwhile, I've published on GitHub this repository, including a port of Sonivox EAS for Linux with ALSA Sequencer MIDI input and PulseAudio output. It also depends on Qt5 and Drumstick. Enjoy!
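The described port boils down to the usual MIDI-in, render, audio-out loop. As a rough illustration only (the class and method names below are invented for this sketch, not the Drumstick-RT or Sonivox EAS API), a toy version of that loop might look like:

```python
import math

class ToySynth:
    """Stand-in for the synth a backend wraps: ALSA Sequencer events would
    arrive as note_on/note_off calls, and render() buffers would be pushed
    to PulseAudio. Here we just mix one sine per active note."""

    SAMPLE_RATE = 44100

    def __init__(self):
        self.active = {}          # MIDI note number -> current phase (radians)

    def note_on(self, note):
        self.active[note] = 0.0

    def note_off(self, note):
        self.active.pop(note, None)

    def render(self, frames):
        """Return `frames` mono samples; a real backend calls the EAS DSP here."""
        out = []
        for _ in range(frames):
            s = 0.0
            for note, phase in self.active.items():
                freq = 440.0 * 2 ** ((note - 69) / 12)     # MIDI note to Hz
                s += math.sin(phase)
                self.active[note] = phase + 2 * math.pi * freq / self.SAMPLE_RATE
            out.append(s / max(len(self.active), 1))
        return out

synth = ToySynth()
synth.note_on(69)                 # A4 arrives from the sequencer
buf = synth.render(64)            # ...gets rendered into an audio buffer
synth.note_off(69)
silence = synth.render(64)        # no active notes -> silence
```

The real backend's value is exactly in replacing the toy oscillator with the EAS wavetable engine while keeping this event/render shape.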

Sonivox EAS for Linux and Qt:
https://github.com/pedrolcl/Linux-SonivoxEas

Related Android project:
https://github.com/pedrolcl/android/tree/master/NativeGMSynth

by Pedro Lopez-Cabanillas (noreply@blogger.com) at April 03, 2016 04:59 PM

March 31, 2016

digital audio hacks – Hackaday

The ATtiny MIDI Plug Synth

MIDI was created over thirty years ago to connect electronic instruments, synths, sequencers, and computers together. Of course, this means MIDI was meant to be used with computers that are now thirty years old, and now even the tiniest microcontrollers have enough processing power to take a MIDI signal and create digital audio. [mitxela]’s polyphonic synth for the ATtiny 2313 does just that, using only two kilobytes of Flash and fitting inside a MIDI jack.

Putting a MIDI synth into a MIDI plug is something we’ve seen a few times before. In fact, [mitxela] did the same thing a few months ago with an ATtiny85, and [Jan Ostman]’s DSP-G1 does the same thing with a tiny ARM chip. Building one of these with an ATtiny2313 is really pushing the envelope, though. With only 2 kB of Flash memory and 128 bytes of RAM, there’s not a lot of space in this chip. Making a polyphonic synth plug is even harder.

The circuit for [mitxela]'s chip is extremely simple, with power and MIDI data provided by a MIDI keyboard, a 20 MHz crystal, and audio output provided by eight digital pins summed with a bunch of resistors. Yes, this is only a square wave synth, and the polyphony is limited to eight channels. It works, as the video below spells out.

Is it a good synth? No, not really. By [mitxela]’s own assertion, it’s not a practical solution to anything, the dead bug construction takes an hour to put together, and the synth itself is limited to square waves, with some ugly quantization at that. It is a neat exercise in developing unique audio devices and especially hacky, making it a very cool build. And it doesn’t sound half bad.
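The eight-pins-into-resistors trick is easy to model in software. The sketch below is a hypothetical reconstruction of the idea, not [mitxela]'s actual firmware: each voice is a phase accumulator whose top bit stands in for one output pin, and the mixed sample is simply the count of high pins, a 0..8 level "DAC".

```python
def make_voice(increment):
    """A voice is just a mutable [phase, increment] pair (16-bit accumulator)."""
    return [0, increment]

def next_sample(voices):
    """Advance every voice one tick and mix the pin states (range 0..len(voices))."""
    level = 0
    for v in voices:
        v[0] = (v[0] + v[1]) & 0xFFFF   # 16-bit wrap, like an AVR register
        level += v[0] >> 15             # MSB of the phase = the square wave bit
    return level

# One voice at 1/64 of the tick rate: increment = 65536 / 64 = 1024,
# so the output toggles with a 64-tick period and 50% duty cycle.
voices = [make_voice(1024)]
samples = [next_sample(voices) for _ in range(128)]
```

With eight voices running, the summed levels are what the resistor network turns into an analog waveform, which also explains the quantization the article mentions.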


Filed under: ATtiny Hacks, digital audio hacks, musical hacks

by Brian Benchoff at March 31, 2016 05:00 AM


March 29, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.0 stable release (binaries)

Pre-built binary images of the 1.8.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

March 29, 2016 10:00 AM

March 27, 2016

Libre Music Production - Articles, Tutorials and News

Petigor's Tale used Audacity for sound recording

Petigor's Tale used Audacity for sound recording

When the authors of Petigor's Tale, a game developed using Blend4Web, wanted to record and edit sound effects for their upcoming game, their choice fell on Audacity.

Read their detailed blog entry about how the editing and recording were done.

by admin at March 27, 2016 08:38 PM

March 26, 2016

Libre Music Production - Articles, Tutorials and News

DrumGizmo version 0.9.9

DrumGizmo version 0.9.9 is just out!

Highlighted changes / fixes:
 - Switch to LGPLv3
 - Linux VST
 - Embedded UI
 - Prepped for diskstreaming (but not yet implemented in UI)
 - Loads of bug fixes

Read the ChangeLog file for the full list of changes

Project Page
http://www.drumgizmo.org

by yassinphilip at March 26, 2016 06:20 PM

March 24, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.0 stable release

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

This release has been in the works for half a year and is packed with new features, bug fixes and other improvements.

See /releases/1.8/ for the full list of changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly after the source release by the GStreamer project during the stable 1.8 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

March 24, 2016 10:00 AM

Libre Music Production - Articles, Tutorials and News

AV Linux 2016: The Release

AV Linux 2016: The Release

With this release, Glen is moving away from the 'everything but the kitchen sink' approach and is instead focusing on providing a very stable base suitable for low-latency audio production.

by yassinphilip at March 24, 2016 05:43 AM

March 23, 2016

Libre Music Production - Articles, Tutorials and News

Ardour 4.7 released

Ardour 4.7 released

Ardour 4.7 is now available, including a variety of improvements and minor bug fixes. The two most significant changes are:

by yassinphilip at March 23, 2016 11:02 AM

Linux Audio Users & Musicians Video Blog

Come Around – Evergreen

This is a music video of a song recorded/mixed/mastered using Linux (AV Linux 2016) with Harrison Mixbus 3.1 along with some Calf and linuxDSP plugins. This is also the first production from our new ‘Bandshed’ studio and will be released as part of a full EP in a month or so. The band ‘Evergreen’ is the band my son drums in, and ‘Come Around’ is an original song written by the singer.



by DJ Kotau at March 23, 2016 07:04 AM