planet.linuxaudio.org

January 16, 2017

open-source – CDM Create Digital Music

Send MIDI messages faster than ever, right from the command line

Quick! Send a MIDI control change message! Or some obscure parameter!

Well, sometimes typing something is the easiest way to do things. And that’s why Geert Bevin’s new, free and open source tool SendMIDI is invaluable. Sorry to nerd out completely here, but I suspect this is going to be way more relevant to my daily life than anything coming out of NAMM this week.

In this case, whether you know much about how to use a command line or not, there’s almost certainly no faster way of performing basic MIDI tasks. Anyone working with hardware is certain to want one. (I suspect someone will make their own little standalone MIDI tool by connecting a Raspberry Pi to a small keyboard and carrying it around like a MIDI terminal.)

The commands are simple and obvious and easy to remember once you try them. Installation is dead-simple. Every OS is supported – build it yourself, install with Homebrew on macOS, or – the easiest method – grab a pre-built binary for Windows, Mac, or Linux.

And now with version 1.0.5, the whole thing is eminently usable and supports more or less the entire MIDI spec, minus MIDI Time Code (which you wouldn’t want to send this way anyway).

So troubleshooting, sending obscure parameter changes, and other control tasks are now simpler than ever. It’s a must for hardware lovers.
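If you’d rather script messages than type each one by hand, wrapping the CLI is trivial. Here’s a minimal Python sketch (a hedged illustration: it assumes sendmidi is on your PATH, the device name is a placeholder you’d swap for one reported by sendmidi list, and the full command vocabulary is in the project README):

```python
# Minimal sketch: drive the sendmidi CLI from Python via subprocess.
# Assumes sendmidi is on the PATH; "MIDI Monitor" is a placeholder device name.
import subprocess

DEVICE = "MIDI Monitor"

def send_cc(controller, value, channel=1, device=DEVICE):
    """Send a single MIDI control change message through sendmidi."""
    subprocess.run(
        ["sendmidi", "dev", device, "ch", str(channel),
         "cc", str(controller), str(value)],
        check=True,
    )

if __name__ == "__main__":
    send_cc(7, 100)  # set channel volume (CC 7) to 100
```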

Developers, that support for all operating systems is also evidence of how easy the brilliant open source C++ JUCE framework makes cross-platform builds. The Projucer tool does all the magic. “But wait, I thought JUCE was for making ugly non-native GUIs,” I’m sure some people are saying. No, actually, that’s wrong on two counts. One, JUCE doesn’t necessarily have anything to do with GUIs; it’s a full-featured multimedia framework focused on music, and this tool shows your end result might not have a GUI at all. Two, if you’ve seen an ugly UI, that’s the developer’s fault, not JUCE’s – and very often you’ve seen beautiful GUIs built in JUCE, but as a result didn’t know that’s how they were built.

But anyone should grab this, seriously.

https://github.com/gbevin/SendMIDI

by Peter Kirn at January 16, 2017 04:30 PM

Libre Music Production - Articles, Tutorials and News

January 12, 2017

GStreamer News

GStreamer 1.11.1 unstable release

The GStreamer team is pleased to announce the first release of the unstable 1.11 release series. The 1.11 release series adds new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.11 release series will lead to the stable 1.12 release series in the coming weeks. Any newly added API can still change until that point.

Full release notes will be provided at some point during the 1.11 release cycle, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

January 12, 2017 03:00 PM

January 11, 2017

Scores of Beauty

Working With Word

I have regularly described the advantages of working with LilyPond and version control on this blog (for example in an ironic piece on things one would “miss” most with version control), but this has always been from the perspective of not having to use tools like Finale or MS Word. Now I have been hit by such a traditional workflow and feel the urge to comment on it from that perspective …

Recently I had to update a paper presentation for the printed publication, and for that purpose I was given a document with editorial guidelines. Of course I wasn’t surprised that a Word document was expected, but some of the other regulations struck me as particularly odd. Why on earth should I use a specific font family and size for body text and another specific size for footnotes? The Word document won’t be printed directly, so my font settings most surely won’t end up in the final PDF anyway because they are superseded by some style definition in some professional DTP program. Well, I could of course simply ignore that as it doesn’t actually interfere with my work. But it is such a typical symptom of academics not having properly reflected on their tools that I can’t help but make that comment.

Of course there are many reasonable rules, for example about abbreviations, but I must say when guidelines ask me to use italics for note names and super/subscript (without italics) for octave indications I feel almost personally insulted. Wow! All the authors of this compiled volume are expected to keep their formatting consistent, without even resorting to style sheets. I really wouldn’t want to be the typesetting person having to sort out that mess. What if I wanted to discern between different types of highlighting and give them differentiated appearance, for example by printing author names in small caps and note names in a semibold font variant? And what about consistently applying or not applying full and half spaces around hyphens, or rather around hyphens, en-dashes or em-dashes (hm, who is going to sort this out, given that many authors don’t know the difference and simply let their word processor do the job)?
But guess what? When I complained and suggested at least providing a Word template with all the style sheets prepared, the reply I got was one of unashamed perplexity – they didn’t really understand what I was talking about. I would like to stress that I do not mean this personally (in case someone realizes I’m talking about them …), but this was a moment that got me thinking: if I have to “professionally” discuss these matters on such a basic level, how could I ever hope to get musicologists to embrace the world of text-based music editing?

Well, I forced myself to stay calm and write the essay in LibreOffice (although I plead guilty to using style sheets anyway). Just as expected, writing without being able to properly manage the document with Git feels really awkward. Being able to quickly switch “sessions” to work on another part, or just to draft around without spoiling the previous state, has become second nature to me, and I really missed that. On the other hand I have to admit that being able to retrace one’s steps or selectively undo arbitrary edits is something one rarely needs in practice. But knowing it would be possible really makes a difference and isn’t just a fancy or even nerdy idea. Think of the airbags built into your car: you will rarely if ever actually “use” them, but you wouldn’t want to drive without them anymore once you have grown accustomed to this level of safety. But eventually I completed the text and submitted the Word document, handing things over and letting “them” deal with the crappy stuff. Unfortunately things weren’t over yet.

After having submitted my paper I learned about another paper going into the proceedings that I would like to cross-reference. So I asked the editor if I could update my paper, and he said yes, there’s still time for that. But now I’m somewhat at a loss because I ended up in exactly the situation I’ve always made fun of. What if I simply send an updated document? Oops, I can’t tell if the editor has already modified my initial submission – in that case he’d have to manually identify the changes and apply them to his copy. And maybe he’d even have to adjust my updates if they should conflict with any changes he has already made. Ah, word processors have this “track changes” option, shouldn’t this solve the issue? Hm, partially: I have to make sure that I only apply these changes in this session so the editor has a chance to identify them properly. Then he still has to apply them manually, so this approach still carries a pretty high risk of errors. OK, maybe I should simply describe the intended changes in an email? Oh my, this requires me to actually keep track of the changes myself so I can later refer to the “original” state in my message. Probably I’ll have to look into that original document state, maybe from the “sent” box of my email client? And none of these options take into account that when the document is eventually considered final, the editor will be sitting on a pile of copies and will have to be particularly careful about which version to pass on to the typesetter …

Oh my, life as an author is just infinitely easier when your documents can healthily evolve under version control and when you can discuss and apply later edits through branches and pull requests, pointing others to exactly the changes you have applied. Of course the comparison is as lame as any comparison, but to me having to develop a text with Word feels like being booked for a piano recital, showing up at the venue and, instead of a Steinway D, finding this on stage:


In about a month I will have to submit yet another Word document for the proceedings of another conference. I realize that I won’t change the world – at least not immediately – and this awkward request will come again. But I’m not exactly looking forward to repeating the experience of being restricted to Word, so I think I will definitely take the plunge and find a proper solution this time. Pandoc will allow me to write the text in its extended Markdown syntax and just export to Word when I’m ready to submit. Until then I have all the benefits of plain text, version control above all, but also the option of editing the document through a web interface and all the other advantages of semantic markup. Maybe I can even convince the (next) editor to stick to the Markdown representation for longer so we can do the review in GitLab and only convert the Markdown to Word for the typesetter? Oh, wait: Pandoc should be able to export the Markdown to something the typesetter can directly import into InDesign, which would allow us (well, me) to completely avoid the intermediate step of a Word document.
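For the record, that export step is a one-liner once the Markdown exists. A minimal sketch using the pypandoc wrapper (assuming pandoc and pypandoc are installed; the file names are placeholders):

```python
# Minimal sketch: convert the version-controlled Markdown source into the
# Word document the editor expects. Assumes pandoc plus the pypandoc wrapper
# are installed; "paper.md" and "paper.docx" are placeholder file names.
import pypandoc

pypandoc.convert_file(
    "paper.md",               # plain-text source that lives in Git
    "docx",                   # target format for submission
    outputfile="paper.docx",
)
```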

Wouter Soudan makes a strong point about this in his post From Word to Markdown to InDesign. His workflow actually runs the other way round, as it is seen from the typesetter’s perspective: he uses Markdown as an intermediate stage to clean up the potentially messy Word documents submitted by authors and to create a smooth pipeline to get things into InDesign – or optionally LaTeX.

I have only scratched the surface so far, preparing the script and slides for (yet another) presentation together with Jan-Peter Voigt. He has set up a toolchain using Pandoc to convert our Markdown content files to PDF, piped through LaTeX. I will definitely investigate this path further, as it feels really good and is efficient, and maybe it can be expanded into a smooth environment for authoring texts including music examples. Stay tuned for further reports …

by Urs Liska at January 11, 2017 12:00 PM

January 09, 2017

aubio

0.4.4 released

A new version of aubio, 0.4.4, is available.

This version features a new log module that allows redirecting errors, warnings, and other messages coming from libaubio. As usual, these messages are printed to stderr or stdout by default.

Another addition is the --minioi option for aubioonset, which lets you adjust the minimum Inter-Onset Interval (IOI) separating two consecutive events. This makes it easier to reduce the number of doubled detections.

New demos have been added to the python/demos folder, including one using the pyaudio module to read samples from the microphone in real time.
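To give an idea of what the Python module looks like in use, here is a minimal onset-detection sketch against an audio file (the file name is a placeholder, and it assumes the binding exposes the minimum-IOI setter corresponding to the new --minioi option, mirroring the C API):

```python
# Minimal sketch: detect onsets in a file with aubio's Python module, raising
# the minimum inter-onset interval to suppress doubled detections.
# "loop.wav" is a placeholder file name.
import aubio

hop_size = 512
src = aubio.source("loop.wav", samplerate=0, hop_size=hop_size)
onset = aubio.onset("default", 1024, hop_size, src.samplerate)
onset.set_minioi_ms(80)  # ignore onsets closer than 80 ms apart

while True:
    samples, read = src()
    if onset(samples):
        print("onset at %.3f s" % onset.get_last_s())
    if read < hop_size:
        break
```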

0.4.4 also comes with a bunch of fixes, including documentation typo corrections, build system improvements, optimisations, and platform compatibility fixes.

January 09, 2017 03:35 PM

January 07, 2017

The Penguin Producer

Composition in Storytelling

During the “Blender for the 80s” series, I went into some of the basics of visual composition. That does well enough to give one a basic glimpse, but it’s really important to understand composition in its own right. Composition is a key element of any visual …

by Lampros Liontos at January 07, 2017 07:00 AM

January 05, 2017

KXStudio News

Carla 2.0 beta5 is here!

Hello again everyone, we're glad to bring you the 5th beta of the upcoming Carla 2.0 release.
It has been more than a year since the last Carla release; this one fixes things that got broken in the meantime and continues the work towards Carla's 2.0 base features.
There are quite a lot of changes under the hood, mostly bugfixes and minor but useful additions.
With that being said, here are some of the highlights:

Carla-Control is back!

Carla-Control is an application for remotely controlling a Carla instance over the network, using OSC messages.
It stopped working shortly after Carla's move to 2.x development, but now it's back, and working a lot better.
It currently works on Linux and Mac OS.


Logs tab

This was also something that was brought back in this release.
It was initially removed from the 2.x series because it did not work so well.
Now the code has been fixed up and brought back to life.

You can disable it in the settings if you prefer your messages to go to the console as usual.
Sadly this does not work on Windows just yet, only for Linux and Mac OS.
But for Windows a Debug/Carla.exe file is included in this build (after you extract the exe as a zip file), which can be used to see the console window.


MIDI Sequencer is dead, long live MIDI Pattern!

The internal MIDI Sequencer plugin was renamed to MIDI Pattern, and received some needed attention.
Some menu actions and parameters were added, to make it more intuitive to use.
It's now exported as part of the Carla-LV2 plugins package, and available for Linux and Mac OS.


More stuff

  • Add carla-jack-single/multi startup tools
  • Add 16 channel and 2+1 (sidechain) variant to Carla-Patchbay plugins
  • Add new custom menu when right-clicking empty rack & patchbay areas
  • Add command-line option for help and version arguments
  • Add command-line option to run Carla without UI (requires project file)
  • Add X11 UI to Carla-LV2
  • Remove MVerb internal plugin (conflicting license)
  • Remove Nekofilter internal plugin (use fil4.lv2 instead)
  • Implement plugin bridges for Mac OS and Windows
  • Implement Carla-LV2 MIDI out
  • Implement initial latency code, used for aligned dry/wet sound for now
  • Implement support for VST shell plugins under Linux
  • Implement sorting of LV2 scale points
  • Allow scanning and loading 32-bit AUs under Mac OS
  • Allow using the same MIDI CC for multiple parameters of the same plugin
  • Allow Carla-VST to be built with Qt5 (Linux only)
  • Bypass MIDI events on carla-rack plugin when rack is empty
  • Find plugin binary when saved filename doesn't exist
  • Force usage of custom theme under Mac OS
  • New option for whether to put plugin UIs on top of Carla (Linux only)
  • Make canvas draggable with mouse middle-click
  • Make it possible to force-refresh scan of LV2 and AU plugins
  • Plugin settings (force stereo, send CC, etc) are now saved in the project file
  • Renaming plugins under JACK driver mode now keeps the patchbay connections
  • Update modgui code for latest mod-ui, supports control outputs now
  • Lots and lots of bug fixes.

There will still be one more beta release before we go for a release candidate, so expect more cool stuff soon!

Special Notes

  • Carla as a plugin is still not available under Windows; that is planned for the next beta.

Downloads

To download Carla binaries or source code, jump into the KXStudio downloads section.
If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
Bug reports and feature requests are welcome! Jump into Carla's GitHub project page for those.

by falkTX at January 05, 2017 10:23 PM

January 04, 2017

drobilla.net

Jalv 1.6.0

jalv 1.6.0 has been released. Jalv is a simple but fully featured LV2 host for Jack which exposes plugin ports to Jack, essentially making any LV2 plugin function as a Jack application. For more information, see http://drobilla.net/software/jalv.
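Basic usage is just jalv followed by a plugin URI. If you want to launch a plugin headlessly from a script, a minimal sketch (the URI below is the stock LV2 example amplifier, used purely for illustration):

```python
# Minimal sketch: launch the console version of Jalv on an LV2 plugin URI,
# which makes that plugin show up as a Jack client. The URI is the stock
# LV2 example amplifier, used here purely for illustration. Runs until
# jalv exits or is interrupted.
import subprocess

subprocess.run(["jalv", "http://lv2plug.in/plugins/eg-amp"], check=True)
```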

Changes:

  • Support CV ports if Jack metadata is enabled (patch from Hanspeter Portner)
  • Fix unreliable UI state initialization (patch from Hanspeter Portner)
  • Fix memory error on preset save resulting in odd bundle names
  • Improve preset support
  • Support numeric and string plugin properties (event-based control)
  • Support thread-safe state restoration
  • Update UI when internal plugin state is changed during preset load
  • Add generic Qt control UI from Amadeus Folego
  • Add PortAudio backend (compile time option, audio only)
  • Set Jack port order metadata
  • Allow Jack client name to be set from command line (thanks Adam Avramov)
  • Add command prompt to console version for changing controls
  • Add option to print plugin trace messages
  • Print colorful log if output is a terminal
  • Exit on Jack shutdown (patch from Robin Gareus)
  • Report Jack latency (patch from Robin Gareus)
  • Exit GUI versions on interrupt
  • Fix semaphore correctness issues
  • Use moc-qt4 if present for systems with multiple Qt versions
  • Add Qt5 version

by drobilla at January 04, 2017 05:24 PM

Lilv 0.24.2

lilv 0.24.2 has been released. Lilv is a C library to make the use of LV2 plugins as simple as possible for applications. For more information, see http://drobilla.net/software/lilv.

Changes:

  • Fix saving state to paths that contain URI delimiters (#, ?, etc)
  • Fix comparison of restored states with paths

by drobilla at January 04, 2017 04:48 PM

January 03, 2017

Linux – CDM Create Digital Music

New tools for free sound powerhouse Pd make it worth a new look

Pure Data, the free and open source cousin of Max, can still learn some new tricks. And that’s important – because there’s nothing that does quite what it does, with a free, visual desktop interface, permissive license, and embeddable and mobile versions integrated with other software, free and commercial alike. A community of some of its most dedicated developers and artists met late last year in the NYC area. What transpired offers a glimpse of how this twenty-year-old program might enter a new chapter – and some nice tools you can use right now.

To walk us through, attendee Max Neupert worked with the Pdcon community to contribute this collaborative report.

For many participants, it was an epiphany of sorts. Finally, they met face-to-face the people whom they had only known by a nickname or acronym from the [Pd] forum or the infamous mailing list.

In 2016, we’ve finally seen a new edition of the semi-regular Pure Data Convention. Co-hosted by Stevens Institute of Technology in Hoboken, NJ and New York University in Manhattan, the event packed six days with workshops, concerts, exhibitions, and peer-reviewed paper/poster presentations.

Pure Data (or Pd for short) is an ever-growing patcher programming language for audio, video and interaction. With 20 years under its belt, Pure Data is not the newest kid in town, but it’s built specifically with the idea of preservation in mind. It’s also the younger sibling of Max/MSP, with a more bare-bones look, but open-source and with a permissive BSD license.

Since the advent of libpd, Pure Data has been embedded in many apps where it serves as an audio engine. At the Pd convention, Dan Wilcox, Peter Brinkmann, and Tal Kirshboim presented a look back on six years with libpd. Chris McCormick’s PdDroidParty, Dan Wilcox’ PdParty, and Daniel Iglesia’s MobMuPlat are building on libpd and simplifying the process of running a Pd patch on a mobile device. For the Pd convention, they joined forces and gave a workshop together.

What was the most exciting part of the Convention? That answer will be different depending on who you ask. For the electronic music producer, it might have been Peter Brinkmann’s presentation of Ableton Link for Pure Data, allowing synchronization and latency compensation with Live.

Previously on CDM: Free jazz – how to use Ableton Link sync with Pure Data patches

For the Max/MSP convert, it might be the effort to implement objects from Max as externals for Pd. That library of objects, aptly named cyclone [after Cycling ’74], has been around for a while, but has now seen a major update by Alexandre Porres, Derek Kwan, and Matthew Barber.

Cyclone: A set of Pure Data objects cloned from Max/MSP [GitHub]

Hint: not all patches look as messy as this. The insanity of the Ninja Jamm patch.

If you’re a musicology nerd, Reiner Kraemer et al. might have grabbed you with their analysis tools for (not only) Renaissance music.

https://github.com/ELVIS-Project/VIS-for-Pd

Or what about Matthew Barber’s effort to extend Frank Barknecht’s idea of list-abstractions to arrays, or Eric Lyon’s powerful FFTease and LyonPotpourri tools?

Pd already comes in many flavors. Amongst them are terrific variants, like Pd-L2Ork and its development branch Purr-Data [not a typo], which were presented at the convention by Ivica Bukvic and Jonathan Wilkes. Purr-Data is a glimpse into the possible future of Pd: its interface is rendered as SVG instead of with Tcl/Tk. Ed.: In layman’s terms – you have a UI that’s modern and sleek and flexible, not fugly and rigid like the one you probably know.

Pd compiles for different processors and platforms. This is getting complex, and it’s important to make sure internal objects and externals behave the way they were intended to across these variants. IOhannes M zmölnig’s research on “fully automated object testing” takes care of that, and with double-precision and 64-bit builds it is an essential stepping stone to making sure Pd stays solid. IOhannes is also the only member of the community who has attended all four Pd conventions since Graz in 2004.

Katja Vetter moderated the open workbench sessions, where “the compilers” discussed development and maintenance of Pd. She also performed as “Instant Decomposer” in an incredibly witty, poetic and musically impressive one-woman act.

Katja always makes an impression with her outfit.

Cyborg Onyx Ashanti.

The concerts in the evening program were a demonstration of the variety and quality of the Pure Data scene, from electroacoustic music, interface based experiments, mobile and laptop orchestras to the night of algo-rave.

The participants of the 2016 Pure Data Convention. Organizers Jaime Oliver (leftmost), Sofy Yuditskaya (somewhat in the middle) and Ricky Graham (too busy to pose on the picture).

For all the excellent electronic means of keeping a community running, the conventions remain an important way to grow the human connections between its members and to get things done. We are looking forward to the next gathering in 2018 — this time it might be in Athens, I’ve overheard. Ed.: If anyone wants to join for an interim meeting in 2017, I’m game to use the power of CDM to help make that happen!

Performances

Watch some of the unique, experimental performances featured during the conference (many more are online):

Video archive:
http://www.nyu-waverlylabs.org/pdcon16/concerts/

More resources

Some useful stuff found during our Telegram chat:
Loads of abstractions and useful things by William Brent

Ed Kelly’s software and abstractions, including some rather useful tools; Ed developed Ninja Tune iOS/Android remix app Ninja Jamm‘s original Pd patch

Full program:
http://www.nyu-waverlylabs.org/pdcon16/program/

Chat about Pd on Telegram (a useful free chat client):
https://telegram.me/puredata

Places to share patches:
http://www.pdpatchrepo.info/
http://patchstorage.com/

by Peter Kirn at January 03, 2017 11:53 PM

January 01, 2017

digital audio hacks – Hackaday

Circuit Bent CD Player Is Glitch Heaven

Circuit bending is the art of creatively short circuiting low voltage hardware to create interesting and unexpected results. It’s generally applied to things like Furbys, old Casio keyboards, or early consoles to create audio and video glitches for artistic effect. It’s often practiced with a random approach, but by bringing in a little knowledge, you can get astounding results. [r20029] decided to apply her knowledge of CD players and RAM to create this glitched out Sony Discman.

Portable CD players face the difficult problem of vibration and shocks causing the laser to skip tracks on the disc, leading to annoying stutters in audio playback. To get around this, better models feature a RAM chip acting as a buffer that allows the player to read ahead. The audio is played from the RAM, giving the laser time to find its track again and refill the buffer when shocks occur. As long as the laser can get back on track fast enough before the buffer runs out, the listener won’t hear any audible disturbances.

[r20029] soldered wires to the leads of the RAM chip, and broke everything out into banana jacks to create a patch bay for experimenting. Shorting the various leads of the chip allows both the data and the addressing of the RAM to be manipulated. This can lead to audio samples being played back out of sync, samples being mashed up with addresses, and all manner of other weird combinations. This jumbled, disordered playback of damaged samples is what creates the desired glitchy sounds. [r20029] notes that certain connections on the patch bay will cause playback to freeze; turning the anti-skip feature off and back on will allow playback to resume.

The write-up highlights the basic methodology of the hack if you wish to replicate it – simply find the anti-skip RAM in your own CD player by looking for address lines, and break out the pins to a patch bay yourself. This should be possible on most modern CD players with anti-skip functionality; it would be interesting to see it in action on a model that can also play back MP3 files from a data CD.

Circuit bending is a fun and safe way to get into electronics, and you can learn a lot along the way. Check out our Intro to Circuit Bending to get yourself started.


by Lewin Day at January 01, 2017 09:00 PM

December 31, 2016

The Penguin Producer

Blender for the 80s: Outlined Silhouettes

Having a landscape is nice and all, but what’s the point if there isn’t anything on the landscape?  In this article, we will populate the landscape with black objects containing bright neon silhouettes.   For this tutorial, we’ll place some silhouettes in our composition.  I will assume you’ve read the …

by Lampros Liontos at December 31, 2016 07:00 AM

December 29, 2016

aubio

Sonic Runway takes aubio to the Playa

We just learned that aubio was at Burning Man this year, thanks to the amazing work of Rob Jensen and his friends on the Sonic Runway installation.

Sonic Runway — photo by George Krieger

Burning Man is an annual gathering that takes place in the middle of a vast desert in Nevada. For its 30th edition, about 70,000 people attended the festival this year.

Sonic Runway — photo by Jareb Mechaber

The idea behind Sonic Runway is to visualise the speed of sound by building a 300 meter (1000 feet) long corridor, materialized by 32 gates of colored lights.

Each of the gates illuminates at the exact moment the sound, emitted from one end of the runway, reaches it.

The light patterns were created on the fly, using aubio to analyze the sound in real time and have the LED lights flash in sync with the music.

To cover the significant cost of hardware, the whole installation was funded by dozens of backers in a successful crowd-funding campaign.

December 29, 2016 01:45 PM

December 24, 2016

digital audio hacks – Hackaday

Lo-Fi Greeting Card Sampler

We’re all familiar with record-your-own-message greeting cards. Generally they’re little more than a cute gimmick for a friend’s birthday, but [dögenigt] saw that these cards had more potential.

After sourcing a couple of cheap modules from eBay, the first order of business was to replace the watch batteries with a DC power supply. Following the art of circuit bending, he then set about probing contacts on the board. Looking to control the pitch of the recorded message, [dögenigt] found two pads that, when touched, changed the speed of playback. Wiring these two points to the ears of a potentiometer allowed the pitch to be varied continuously. Not yet satisfied, [dögenigt] wanted to enable looped playback, and found a pin that went low when the message finished playing. Wiring this back to the play button allowed the recording to loop continuously.

[dögenigt] now has a neat little sampler on his hands for less than $10 in parts. To top it off, he housed it all in a sweet 70s intercom enclosure, using the Call button to activate recording, and even made it light sensitive with an LDR.

We’ve seen a few interesting circuit bends over the years – check out this digitally bent Roland TR-626 or this classic hacked Furby.

Check out the video under the break.


by Lewin Day at December 24, 2016 09:01 AM

The Penguin Producer

Blender for the 80s: The Starry Sky

When dealing with wireframe landscapes, you usually also see a starry sky, so let’s see if we can add a starfield in Blender.   A Note about Scenes and Layers Before we begin, however, we need to discuss “Scenes” and “Render Layers.” About Scenes A scene is a group of …

by Lampros Liontos at December 24, 2016 07:00 AM

December 22, 2016

ardour

How (not) to provide useful user feedback, Lesson 123a

Finding a piece of software not to your liking or not capable of what you need is just fine (expected, almost).

And then there's this sort of thing.

Can you make it user friendly? Fucking ridiculous. I use Sonar,plug in my dongle/breakout box,and it just works. One setting change for in and out for the duo or quad capture. No one in the business has anything good to say about Ardour,if they've even heard of it. I'm not trying to be rode. It's a suggestion. Make it user friendly.

To our friends at Cakewalk: you're welcome.

by paul at December 22, 2016 11:05 AM

digital audio hacks – Hackaday

An Eye-Catching Raspberry Pi Smart Speaker

[curcuz]’s BoomBeastic mini is a Raspberry Pi based smart connected speaker. But don’t dis it as just another media center kind of project. His blog post is more of a How-To guide on setting up container software, enabling OTA updates and such, and can be a good learning project for some. Besides, the design is quite elegant and nice.

The hardware is simple. There’s the Raspberry Pi — he’s got instructions on making it work with the Pi2, Pi2+, Pi3 or the Pi0. Since the Pis have limited audio capabilities, he’s using a DAC, the Adafruit I2S 3W Class D Amplifier Breakout for the MAX98357A, to drive the speaker. The I2S used by that part is Inter-IC Sound — a three-wire, peer-to-peer audio bus — not to be confused with I2C. For some basic visual feedback, he’s added an 8×8 LED matrix with an I2C interface. A speaker rounds out the BOM. The enclosure is inspired by the Pimoroni PiBow, which is a stack of laser-cut MDF sheets. The case design went through four iterations, but the final result looks very polished.

On the software side, the project uses Mopidy — a Python application that runs in a terminal or in the background on devices that have network connectivity and audio output. Out of the box, it is an MPD and HTTP server. Additional front-ends for controlling Mopidy can be installed from extensions, enabling Spotify, Soundcloud and Google Music support, for example. To allow over-the-air programming, [curcuz] is using resin.io which helps streamline management of devices that are hard to reach physically. The whole thing is containerized using Docker. Additional instructions on setting up all of the software and libraries are posted on his blog post, and the code is hosted on GitHub.

There are a couple of “to-dos” on his list which would make this even more interesting. Synced audio is one: in a multi-device environment, have the possibility of syncing the devices so they reproduce the same audio. The other would be to add emoji and equalizer display modes for the LED matrix. Let [curcuz] know if you have any suggestions.


by Anool Mahidharia at December 22, 2016 12:00 AM

December 21, 2016

digital audio hacks – Hackaday

I Think I Failed. Yes, I Failed.

Down the rabbit hole you go.

In my particular case I am testing a new output matching transformer design for an audio preamplifier and using one of my go-to driver circuit designs. Very stable, and very reliable. Whack it together and off you go to test-and-measurement land without a care in the world. This particular transformer is designed to be driven by a class A amplifier operating at 48 volts in a pro audio setting, the turn-the-knobs-with-your-pinky-in-the-air sort of thing. Extra points if you can find some sort of long-out-of-production parts to throw in there for audiophile cred, and I want some of that.

Let’s use some cool retro transistors! I merrily go along for hours designing away: carefully balancing the current of the long-tailed pair input, picking just the right collector power resistor and capacitor value to drive the transformer, calculating the negative feedback circuit for proper low-frequency cutoff and high-frequency stability. Into the breadboard the parts go — jumper clips, meter probes, and test leads abound — a truly joyful event.

All of the voltages check out, the frequency response is what you would expect, and a slight tweak to the feedback loop brought everything right into happiness. Time to fire up the trusty old HP 334A Distortion Analyzer. Those old machines require you to calibrate the input circuit and the voltmeter, tune a filter to the fundamental frequency you are applying to the device under test, and step down to lower and lower distortion levels until the meter happily sits somewhere in the middle of a range.

Most modern circuits, even in cheap products, go right down to sub-0.1% total harmonic distortion without even a thought, and I expected this to be much the same. The look of horror must have been pronounced on my face when the distortion level of my precious circuit was something more akin to a clock radio’s! A frantic search began. Was it a bad jumper, a dirty lead in the breadboard, or an unseated component? Was my function generator in some state of disrepair? Was the Stephen King story Maximum Overdrive coming true, with my bench about to eat me alive? All distinct possibilities in this state of panic.

After a little break, as the panic and need to find an exact singular problem began to fade I realized something. It was doing exactly what it was supposed to be doing.

The input part of choice in this case is a mostly forgotten ’60s Hitachi PNP silicon part, the 2SA565, in a (here comes the audiophile cred as we speak) TO-1 package with the long leads so perfect for point-to-point assembly. (More on this aspect another time.) After all, these parts adorned the audio stages of countless Japanese radios and such. A PNP small-signal BJT is as good as any, right? Also, these surplus-store caps and resistors are perfectly good. They all measure out ‘good’ on the meter, after all. These jumper leads and meter probes are Pomona. Best you can get. No worries there. And on and on the excuses and rationalizations come.

By this point no amount of optimism or delusion could really help. The grown-up hiding inside my head spoke up, and the truth was obvious. How could a pile of old noisy parts, with wiring more like spaghetti than a proper electronic device, do any better? I am trying to reach orbit with a bottle rocket of my own design. I lost perspective. I was so eager to test my new widget that I completely neglected good scientific practice, on the faith that previous experience could guide me through the lack of a proper setup and experimental control. Just the crosstalk on those jumpers and probes could account for this problem, not to mention noisy, out-of-spec old parts.

It could well be impossible to ever find all of the possible causes. I built failure in from the start, just for the sake of having something that used parts nutcase audiophiles would find more visually appealing. I’d better go find out where I lost my integrity on this one. Perhaps I set it down with my wallet and keys when I got home from work today. I think I will go clean my bench and lay out a PCB with new, modern components so I can actually get this test done.

This is a standard long-tailed pair input circuit used in most linear audio designs, and a very handy thing to be familiar with, as it is extremely linear and adaptable. It is shown here in its standard audio configuration, including high-frequency shelving for stability and a low-pass filter for DC drift reduction.


by Charles Alexanian at December 21, 2016 06:00 PM

Libre Music Production - Articles, Tutorials and News

LSP Plugins anniversary release 1.0.18

Vladimir Sadovnikov has just released version 1.0.18 of his audio plugin suite, LSP Plugins. This release celebrates one year since the LSP Plugins 1.0.0 release. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

This release includes the following new plugins and changes -

by Conor at December 21, 2016 12:56 PM

MOD Devices Blog

MODx meetup

Greetings, music lovers!

Things have been getting exciting at MOD headquarters in Berlin lately - we’ve been hard at work getting MOD Duos into the hands of musicians all around the world, and also hard at work implementing new features, fixes & effects for the musicians who are jamming on their MOD Duos already. In amongst all that we got to meet with some of you at our first MODx user community meetup, right here in Berlin!

Hosted at Neukölln’s amazing music tech co-working, studio & event space Noize Fabrik, and organised/promoted with our friends Musik Hackspace Berlin, we held a free workshop on creative effects processing with over 30 musicians from a range of different backgrounds. From cellists & guitarists to MPC beat producers & Max MSP nerds, it seemed like there was a little bit of everything, so some very interesting jam sessions ensued.

Since we were experimenting with stuff like re-ordering effects & processors, trying the same effects at different points in a signal chain, and sidechaining or otherwise combining different sound sources or using one sound to modulate another, the Duo’s web browser GUI made it super easy to demonstrate the principles we were implementing and hear the changes immediately. It was great to see all of the different effects setups being dreamed up & implemented by such a varied group of musicians. I have a background in organising & hosting music technology hackathons, and seeing the stuff that happens when a group of artists, musicians & technologists are gathered in a room together with the right technology in their hands never ceases to amaze me. Surrounded by a group of people creating amazing sounds with the Duo, I had a realisation: much like with the Arduino and Raspberry Pi, the most powerful part of the MOD Duo is not the hardware or the software - it is the community of people who use and develop for it. Open source devices for producing or processing sounds are now in the hands of more musicians than ever before, and by sharing our creations, learning and collaborating with each other we become part of an ecosystem that fosters creativity and equips each of us with more tools to realise our musical ideas.

Anyone who knows me outside of what I’ve been up to at MOD may know already that I am a huge nerd for Cycling74’s Max/MSP, and one thing we’ve been working on at MOD lately is the ability to compile Gen~ code from inside Max into LV2 plugins that can be used on the Duo! The exciting possibilities offered by this integration encouraged me to develop my own skills, and saw me delving further into DSP design than I’ve ever gone before, resulting in the creation of my first LV2 plugin. I can’t wait to see where this new rabbit-hole goes…

We’ll be looking at how to make our own plugins from Max MSP Gen~ code in a workshop at our next MODx meetup, when MOD’s in-house Linux audio guru FalkTX will show us the ropes. For anyone with a bit of existing knowledge on Max MSP it’ll be a great way to see how you can expand your existing skillset with the ability to create VST and LV2 plugins without having to learn a whole new development platform, and if you’re a MOD Duo user it’ll equip you with the knowledge you need to take the creative freedom you’ve found in Max and put it under your feet with the Duo. I envisage some mind-bending Max-infused guitar effects featuring at my next gig…

So stay tuned for more info about the next MODx meetup in Berlin, we would love to see you there! If you can’t make it, never fear - The MOD community is everywhere! Why not come and say hi on our forum or join the MOD Duo user group on Facebook? The conversations we have with our community at events as well as via the forum or social media shape the way we work at MOD. If you have dream features or ideas, requests for future development, or need some help with something you’ve developed yourself then we would love to hear from you or see you at one of our future events.

Oh and finally, if you’ve not joined us yet, what are you waiting for? This is the stomp box revolution. Musicians of all kinds, empowering ourselves with open source technology to change the way we play and perform music. Buy a MOD Duo at moddevices.com and join the revolution today.

That’s all from me. Happy holidays from the team at MOD, you’ll hear more from us after Christmas and in the meantime keep making music, keep loving live & keep enjoying your MOD Duo!

  • Adam @ MOD HQ

December 21, 2016 05:20 AM

December 20, 2016

open-source – CDM Create Digital Music

Spaceship Delay is an insane free plug-in inspired by hardware

Spaceship Delay is a free modeling plug-in for Mac and Windows with some wild effects. And it’s made possible partly thanks to the openness of hardware from KORG (and us). The plug-in itself you shouldn’t miss, and if you’re interested in how it’s made, there’s a story there, too.

First, the plug-in — it’s really cool, and really out there, not so much a tame modeling effect as a crazy bundle of extreme sonic possibilities. In fact, it’s as much a multi-effects processor as it is a delay.

Here it is in action, just quickly applying some of the sounds to a drum loop (and making use of its “German/Canadian” MeeBlip filter model):

There are tons of extras packed in there, and the unruly quality of it to me is part of the appeal. (I’m planning on making something with this one, absolutely.)

You get three delay modes: single, ping pong, and dual/stereo, plus:

  • Delay time, time subdivisions, tap tempo, sync
  • Feedback
  • Modulation
  • Attack control for triggering via dynamic
  • Modeled “spring” tape reverb based on the Dynacord Echocord Super 76
  • Bitcrusher
  • Tube preamp
  • Vintage phaser
  • Modeled synth filters from the KORG MS-20 and MeeBlip anode/triode
  • Monotron delay-inspired delay
  • Freeze switch (opening up use as a looper)
  • Loads of presets

Plus there’s extensive online help to assist you in navigating all these choices. And I totally read it. Really. No, okay, I didn’t, I just played with the knobs. But I did have a look, and it looks nice.

VST, AU, AAX formats
32-bit, 64-bit
macOS, Windows

There are a lot of possibilities here, from subtle to experimental, useful for pretty much anything from drums to vocals to synth to guitar.

But the story behind the modeling is also fascinating. Creator Dr. Ivan Cohen has delved deep into the theory of modeling, and has been writing about the process on his blog. It’s definitely of interest to developers, but makes a good read for anyone curious about vintage and new hardware and different designs of filters and the like. (No doctorate in DSP required.)

Open designs have long been a part of the history of electronic music technology. In the analog days, it was fairly typical to publish circuit designs. These were ostensibly for purposes of repair, but naturally people read and learned from them and produced modified versions of their own. Then along came digital tech, and much of the creativity of the business disappeared into the black boxes of chips – not only to protect intellectual property, but because of the nature of the chips themselves.

Now, we’ve come full circle. Researchers discuss design and modeling in academic circles. And fueled by online communities interested in hacking and the open source movement, hardware makers increasingly share their designs. That’s included KORG publishing MS-20 filter circuits and encouraging modifications, and of course our own MeeBlip project.

What’s cool is that Ivan has used that openness to learn from these designs and try his own implementations, all in a context we never envisioned. So you can apply something inspired by the MeeBlip and Korg filters in a new digital environment.

The vintage Super 76 inspired the tape delay model – and is reason alone to take a look at this plug-in.

Not all of the modeling is perfect yet. But that’s fun, too, as you get some weird and unexpected effects.

Here’s his story on the original version:

https://musicalentropy.github.io/Spaceship-Delay/

And an in-depth discussion of why he used these filters and what inspired him:

The filters in Spaceship Delay

Grab the plug-in here, and consider casting your vote in the KVR Developer Challenge:

http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details

Totally free as in beer.

by Peter Kirn at December 20, 2016 05:53 PM

December 19, 2016

Linux – CDM Create Digital Music

Here’s how MOTU says they’re improving latency on their new interfaces

You’d be forgiven for not noticing, but the top audio interfaces are one of the things that have been steadily getting better. That is, the handful of makers really focused on serving musicians (and other audio and audiovisual applications) have improved interface quality, added a lot of features and connectivity, and improved driver performance.

MOTU is one of the makers on that short list I hear good experiences with. But this fall, when a press release crossed my desk saying they had improved low-latency performance, I wanted a bit more detail than the marketing language was offering. So I spoke to MOTU’s Jim Cooper to clarify a bit.

I know a lot of MOTU boxes are out there in the wild among our CDM readers, so I’d love to hear from those of you using them. (And I don’t want to just favor one vendor – I’d be happy to repeat this conversation with others, as these are the sort of chats I get to have with manufacturers, and it’s nice to be able to share them.)

TL;DR version: MOTU will give you lower round-trip latency on their latest boxes.

Also, some quick notes about what makes the UltraLite mk4 nice:

  • iOS, Linux. It now does USB class-compliant operation, so you can use it with iOS (or even Linux, in fact, even though MOTU don’t mention that).
  • Browser mixing. You can access a 48-channel mixer in your Web browser, meaning this does double duty as a mixer – and your computer becomes the interface.
  • Any input, any output. You can route signals in a customizable router, so any input can go to any output.
  • Quality! MOTU has put in what they say are “super high quality” converters; certainly, my research says you should have some good results.

CDM: Can you go into some detail on the new low latency drivers for the UltraLite?

Sure! Our new low-latency drivers were years in development. These drivers (and the firmware in the hardware, too) are still actively tweaked and optimized, and we regularly release driver updates to further improve performance.

Which hardware is supported? I know MOTU has an integrated driver model, so that means you should see these benefits across the line?

The low latency drivers for the UltraLite-mk4 are for all audio interfaces in our new generation “Pro Audio” family. This covers the latest releases of UltraLite-mk4, the new 624 and 8A interfaces we announced last week, and all MOTU AVB/TSN capable hardware (UltraLite AVB, 1248, 16A, 8M etc.)

What did you do from a technical standpoint to make this work?

The short answer is…we started from scratch, spent a lot of time optimizing, looking at profilers, and optimizing some more. We have learned a lot from our 20 years of writing audio drivers and making audio interfaces. Starting from scratch meant that we could fully capitalize on those lessons learned. At the same time, operating systems have improved along with computer hardware. We can now count on machines having multiple cores and supporting Intel-intrinsic (SSE) operations, which helps a lot.

Okay, this is the one I’m most keen to know: how does performance compare on Windows versus macOS?

It depends on the machine and the software being used. Let’s assume most people have a decent, healthy computer and that we’re talking about USB.

For latency performance, we expect both platforms to perform well. Both should be able to do under 3 ms patch-thru or better. That’s like having your head about three feet further from an audio source.

For CPU performance, it’s mostly negligible on both platforms. The lower your buffer size, the more CPU we use, which has always been the case. In Windows this is generally more true, so there will be a minor difference between platforms.

We want to mention that when connected via Thunderbolt, performance is a little better (for both Mac and Windows). Thunderbolt is also slightly more efficient with regard to CPU usage. But the main point is, with these new drivers, USB holds up remarkably well in comparison to Thunderbolt, given common industry perceptions.

Yeah, I’m currently spec’ing out PCs with Thunderbolt onboard. There have been some under-the-hood improvements to Windows audio lately, I know. Any you would comment on, or that have implications for your projects?

Which improvements are you referring to? Since Vista they’ve had the MMCSS API, which gives DAWs a way to prioritize audio threads over most of the system, which really helps. That helps ASIO drivers quite a bit, too. Kernel drivers still have the limitations of poor timer accuracy and DPC scheduling, which make it more difficult to deliver audio buffers. But we have found ways to address those issues and deliver extremely solid performance.

Ed.: Well, we’re a bit behind, honestly, in tracking Windows changes. I hope to remedy that soon. But if you found Vista annoying and PC hardware options lacking back then, some of the changes to Vista we reported on long ago are now in an OS that’s friendlier and more mature, and I think PC hardware has improved, too. I know there have been some other efforts on Windows audio that we need to keep up to date with. And meanwhile on the Mac side, Sierra has fixed some things, too.

What should users of the UltraLite mk4 expect in real world usage?

A generational improvement in both the driver performance and the overall features and performance of the hardware. On today’s absolute fastest computers, we can achieve full, round-trip monitoring with RTL as low as 1.6 ms with a 32 sample buffer setting at 96 kHz. If you’re running a bunch of effects and tracks, then it’s probably a good idea to bump that up a bit. But even on a good machine (like what most of us have), you can easily achieve 3-4 ms RTL under most practical situations these days.

Thanks, Jim.
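Ed.: As a quick sanity check on the figures Jim quotes, the arithmetic is easy to do yourself. A back-of-envelope sketch (nothing MOTU-specific, just the speed of sound and the buffer math):

```python
# Back-of-envelope check of the latency figures quoted above.
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 C

for latency_ms in (1.6, 3.0):
    metres = SPEED_OF_SOUND * latency_ms / 1000.0
    print(f"{latency_ms} ms is about {metres:.2f} m ({metres * 3.28:.1f} ft) of air")

# A single 32-sample buffer at 96 kHz lasts about a third of a millisecond;
# presumably the rest of the quoted 1.6 ms round trip is conversion, transport
# and driver overhead.
print(f"32-sample buffer at 96 kHz: {32 / 96000 * 1000:.2f} ms")
```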

Okay, so you can add low latency features to the other stuff that’s nice on the UltraLite.

Meanwhile, MOTU’s 624 and 8A are shipping now. Interestingly, they include both USB3 and Thunderbolt. So if you need a mobile interface to swap between machines and not all of them have Thunderbolt, especially on Windows, you’ve got options. I would note that Thunderbolt is spreading fast on the PC, though.

The big deal with the 624 and 8A is that you get 32-34 channels of audio I/O, the ESS Sabre32 DACs with 132 dB dynamic range, and networked capabilities via AVB. I’m guessing AVB isn’t so relevant to most CDM readers, but for those of you needing to combine audio across computers and interfaces, it’s hugely powerful.

And like the other recent interfaces including the UltraLite, you get standalone mixing functionality you can access via any Web browser (even on mobile) on a WiFi network.

There’s also a suite of analysis tools with FFT, oscilloscopes, and visual analyzers.

The AVB stuff on the flagship offerings was nice, but I suspect these could be even bigger – well under a grand, and with I/O that fits a lot of needs.

More:
http://motu.com/

by Peter Kirn at December 19, 2016 04:17 PM

digital audio hacks – Hackaday

Tape-Head Robot Listens to the Floor

We were just starting to wonder exactly what we’re going to do with our old collection of cassette tapes, and then along comes art robotics to the rescue!

Russian tech artist [::vtol::] came up with another unique device to make us smile. This time, it’s a small remote-controlled, two-wheeled robot. It could almost be a line follower, except that instead of detecting the cassette tapes that criss-cross the floor, it plays back whatever it passes over, using two spring-mounted tape heads. Check it out in action in the video below.

Some of the tapes are audiobooks by sci-fi author [Stanislaw Lem] (whom we recommend!), while others are just found tapes. Want to find out what’s on them? Just drive.

We’ve featured [::vtol::]’s work before, which ranges from the conceptual, like this piece that broadcasts poetry in successive BSSIDs from what amounts to a cultured WiFi throwie, to the beautiful, like this visualization of brainwaves using ferrofluid and antifreeze.


by Elliot Williams at December 19, 2016 06:01 AM

December 18, 2016

Scores of Beauty

LilyPond’s Freedom

Oops, I have to plead guilty to some vanity-Googling of my name in combination with “LilyPond”. OK, this is embarrassing, but on the bright side it revealed some blog posts I didn’t know yet. And there’s one in particular that I want to recommend today, because it’s a post that really should have appeared here a long time ago (and my mention in it is very minor). Joshua Nichols wrote a very interesting piece on software freedom, which I suggest you read here: https://joshdnichols.com/2015/11/16/why-i-love-lilypond-freedom/

by Urs Liska at December 18, 2016 01:50 PM

December 17, 2016

The Penguin Producer

Blender for the 80s: Wireframe Mesh

In this article, we talk about the wireframe mesh used as the ground and mountains in many pieces of science fiction art in the 80s. One of the foundations of 80s art was computer graphics. Computers of the time were not very beefy; it was not possible to …

by Lampros Liontos at December 17, 2016 07:00 AM

December 16, 2016

open-source – CDM Create Digital Music

FEEDBOXES are autonomous sound toys that play along with you

We live in an age when we can jam along with machines as well as with humans. And maybe it’s about time that they fed us some clever grooves instead of, you know, fake news and stuff.

Our friend Krzysztof Cybulski of Warsaw, PL’s panGenerator shares his FEEDBOXES. They’re “autonomous” sound objects, capable of responding to audio inputs with perpetually-transforming responses.

It’s all thanks to elegant use of feedback loops – meaning you can toy with these techniques yourself.

Now that’s a better kind of echo chamber.

It also makes use of the awesome, free PdDroidParty by Chris McCormick, which in turn is based on the free libpd library and Pure Data.

It’s not the first time Krzysztof has built instruments around feedback, messing about in the panGenerator workshop for the joy of it. See his feedback synth, too:

It’s worth checking out all panGenerator are doing; they’re really one of the smartest and most imaginative interaction design shops around at the moment, and representative of Poland’s brainpower at its finest.

https://www.facebook.com/pangenerator/

The post FEEDBOXES are autonomous sound toys that play along with you appeared first on CDM Create Digital Music.

by Peter Kirn at December 16, 2016 06:02 PM

December 11, 2016

The Penguin Producer

Discussion: 80s Design

I am in my 40s.  I had my childhood and adolescence centered in the 1980s, and as a result, I have a liking for the art from that period.  Additionally, the artwork from that period is both visually stimulating and simple at the same time, allowing them to be the …

by Lampros Liontos at December 11, 2016 04:57 PM

December 09, 2016

Linux – CDM Create Digital Music

Ableton or FL Studio or Bitwig, Maschine Jam integrates with everything

First, there was software – and mapping it manually to controllers. Then, there was integrated hardware made for specific software – but you practically needed a different device for each tool. Maschine Jam is a third wave: it’s deeply integrated with software workflows, but it can swap from one tool to another without having to change how you work.

That’s possible because Maschine Jam is focused on some fairly specific workflows as far as triggering patterns, creating melodies and rhythms, and controlling parameters. The “jam” part is really focused on live control. So it’s not quite about deep sample editing and studio production like Ableton Push or Maschine Studio, but it is then adaptable to lots of other contexts.

In short, even if you keep your beloved Push in the studio, Maschine Jam wants to be the lightweight live gigging controller you toss in your backpack.

And it doesn’t necessarily force you to choose a particular tool. Even if you never touch Maschine, it’s now a reasonable controller for Ableton Live, FL Studio, and Bitwig Studio in its own right. And significantly, if you do use Maschine, you can now switch between working with Maschine and your DAW of choice, and the control mappings stay the same. (Of course, that may make you decide you want two Jams, but you get the picture.)

I was already impressed by Maschine Jam’s Ableton Live integration. It’s not a Push, mind – there’s no velocity sensitivity, and you will sometimes miss the availability of displays on the hardware. (That means looking at the computer screen, which is part of what these controllers could free you from.) But it’s also lighter, boasts integrated touch strips for mixing and parameter control, and lots of quick workflow shortcuts that make it really handy playing live. When Gerhard first introduced Push, he talked about it as a way to start tracks. And it remains a powerful hardware window into the production process. But now I find Jam fits the rest of the picture: quick jam sessions and playing live.

Oh yeah, and there’s the price: US$399 street, which of course includes Maschine and all the Komplete 11 Select features. That’s not a bad deal on the hardware controller alone, and it’s a stupidly good deal once you figure in that it gives you entry to all the software.

But now a new update deepens the integration with Ableton Live, Max for Live, FL Studio, and Bitwig Studio, too, giving you a range of choices on Mac, Windows, and Linux.

As other controllers attempting to be universal live controllers have faded into the background, Maschine Jam seems to realize the promise. Let’s look at how integration works in each.

Why Maschine, Why Jam?

If I had to show just one feature that explains how Jam is a bit different than Launchpad Push APC grid blah blah more grids blah blah….

Well, it’s this. Maschine’s locking and morphing means that you can experiment with capturing and then transforming different settings. There are some especially deep possibilities here when you combine it with Reaktor Blocks, synth lovers.

So before we start controlling other software, let’s have a look at that:

Ableton Live, Max for Live

Maschine Jam already works in Ableton Live for clip triggering and (crucially) mixing with fader strips. Clip triggering works exceptionally well, in fact: while NI’s grid lacks velocity sensitivity, the compact pads are ideal for this use case and deliver a responsive ‘snap’ when pressed. Device parameter control is there, too, though you may slightly miss having a screen for knowing which control is which.

Here’s the basic Ableton integration. It’s very, very similar to what you get with Ableton Push – but now you can swap between working this way in Maschine and working this way in Ableton. And honestly, part of the appeal to me of Jam is that it does less – so there’s a limited set of stuff that you get really quick at.

(In the very small tweaks department, the update also adds triplet access, finally.)

Where things get interesting in today’s update is that now you’ve got a dedicated Max for Live template, too. That opens up lots of other clever features – or even locking the Jam to a Max patch whilst another controller does something else.

Now, I know Ableton may be a bit squeamish about this being an Ableton controller that lacks their branding and collaboration. But as a user of Live since version 1, part of the ongoing appeal to me of this tool is its versatility and the ability to use a variety of hardware in different situations. So I do hope the Abletons warm up to what NI have done here.

FL Studio

Intrepid FL Studio users have hacked all sorts of smart ways of playing live over the years. Now, more recent versions of FL are really nicely equipped for live performance.

And FL is really an ideal match for Jam. It has long had step sequencing as an integrated, native feature, and now combines the level of steps/notes with larger clips and patterns.


It’s a really lovely environment. In fact, just … possibly mute the video you’re about to see, because while the music will appeal to someone, it sort of reinforces this idea that FL is just for certain music genres. It’s not. You can do anything you like. And FL’s architecture and efficiency I think are top notch.

MIDI, Logic

You can also use the MIDI template included with Maschine Jam to control software. It’s not nearly as deep as the other examples here, but it is interesting. Here’s an example with Apple Logic Pro:

Bitwig Studio

I’ve sort of saved the best for last. Bitwig benefits from having a new architecture rather than loads of ancient legacy code. And as a result, the environment it offers hardware makers for compatibility is really ideal.

Native Instruments have partnered with Bitwig directly as I understand it in order to deliver a template with deep integration. The basic mold is what you get from Ableton – control Maschine, switch and control Bitwig, get pattern creation and sequencing and mixing and parameter control in each.

But there are some subtle and important differences here.


Fine fader control. The best one to me is this one – SHIFT gives you fine-adjustment on the touch strips for more precision, as in Jam.

Note events light up on running patterns.

Bitwig’s onscreen overlay works. That actually gets a bit confusing in Ableton Live, which lacks Maschine’s heads-up display. Actually, it’d be great if Live had this, for Max patchers and custom controllers.

Global swing support. Again, as in Jam. That really adds to the hardware/groove feel of the integration, though.

Switch projects from hardware. You had me at “switch projects.”

Change drum machines using the built-in Bitwig drum machines when sequencing (via SELECT).

SHIFT+SOLO to change pattern length.

And this is definitely the best video, because it comes from Thavius Beck.

More on this from our friends at AskAudio:


Bitwig Studio 1.3.15 Adds Comprehensive Support For Maschine JAM

You’ll want the latest version of Bitwig Studio. This being Bitwig, it’s even ready for Ubuntu.

Bitwig Downloads

The post Ableton or FL Studio or Bitwig, Maschine Jam integrates with everything appeared first on CDM Create Digital Music.

by Peter Kirn at December 09, 2016 06:44 PM

December 08, 2016

OSM podcast

December 07, 2016

open-source – CDM Create Digital Music

PushPull is a crazy futuristic squeezebox instrument you can make

PushPull will blow apart your idea of what a typical controller – or an accordion – might be. It’s a bit like a squeezebox that fell from outer space, coupling bellows with colored lights, sensors, mics, and extra controls. And you can now make one yourself, thanks to copious documentation.

You may have seen the instrument in action over the last couple of years – gasping in the dark.

PushPull Balgerei 2014 from 3DMIN on Vimeo.

But with more complete documentation, you get greater insight into how the thing was made – and you could even follow the instructions to make your own.

Things you expect to see: a bellows, valves, keys.

Things you might not expect: RGB LEDs lighting up the instrument, six capacitive touch sensors, six-direction inertial sensing (for motion), microphones, rotary encoders.

And many of the parts are fabricated via 3D printing. That combines with some more traditional techniques – yes, including cutting, folding, and gluing. It’s all under a permissive Creative Commons attribution license. (That’s a bit scant for open source hardware, actually, in that they might consider some other license, too. But it gets the job done.)


It’s eminently hackable, too, with X-OSC messages sent wirelessly from its sensors, loads of moddable electronics, and recently even integration with Bela, the lovely low-latency embedded platform.
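If you want a quick feel for what listening in on that wireless OSC stream might look like, here’s a minimal Python sketch using the python-osc library – the port is an assumption and no PushPull-specific addresses are used, since the instrument’s actual message names aren’t listed here:

from pythonosc import dispatcher, osc_server

# Print every incoming OSC message so you can see what the sensors are sending.
def handle_message(address, *args):
    print(address, args)

disp = dispatcher.Dispatcher()
disp.set_default_handler(handle_message)  # catch-all, regardless of address
server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 9000), disp)  # 9000 is an assumed port
server.serve_forever()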

The project is the work of Amelie Hinrichsen, Till Bovermann, and Dominik Hildebrand Marques Lopes, who combine overlapping skills in art, product design, soundmaking, music, industrial engineering, and hardware and software engineering. PushPull itself is part of the innovative 3DMIN instrument design project in Berlin, a multi-organization project.

Check out the instructions for more:

http://3dmin.github.io/

The post PushPull is a crazy futuristic squeezebox instrument you can make appeared first on CDM Create Digital Music.

by Peter Kirn at December 07, 2016 04:22 PM

MOD Devices Blog

MOD Duo 1.2 update now available

Hello again, music lovers! We’ve been having a great time recently at MOD. We hosted our first MODx meetup in Berlin, gathering existing members of the MOD Duo user community as well as the attendees of a Musik Hackspace workshop on the creative application of effects for music production & performance. It was great to see MOD Duos in the hands of so many talented & creative people, who utilised them when testing out the different uses of effects that were discussed during the workshop. We also ended up enjoying some amazing impromptu jams which combined music that spanned many different genres - it was a real treat! We’ll be hosting another MODx meetup in Berlin very soon, so if you want to be notified of future events please join our mailing list at moddevices.com or the MOD Duo User Group on Facebook

We’re also very pleased to announce the release of software update 1.2 for your Duo. Check out some of the amazing new features we’ve added:

  • Favorites
    There are now so many pedals & plugins available for the Duo that it was starting to take some time to find those favorite ones which you re-use in lots of your amazing pedalboard creations. Not any more! You can now mark any plugin as a favorite and have all of those appear in a single category. Mein Lieblings!

  • Tap Tempo
    You can now assign a control to tap tempo! There are now a bunch of pedals in the Delay, Modulator, Spatial & Generator categories which support the new tap tempo feature, and I’m sure more & more will start to integrate this great feature. Auf Tempo!

  • Zeroconf support
    Zeroconf support (also known as “Bonjour”) means you can now connect to your MOD using http://modduo.local instead of using the IP address. Null-Konfiguration!

  • Custom ranges for MIDI CCs
    Have you ever found that you wanted finer control over a smaller range of one of your pedal parameters when using a MIDI controller? Well, worry no longer! You can now set custom ranges when using the MIDI learn function. Benutzerdefinierte!

  • Several minor web interface changes
    You’ll also notice a few changes to the Duo’s web interface. Glänzend und neu!

For the changelog and discussion about the update, as well as more detailed information on the features mentioned above, please see this post on the MOD Forum. The next time you open the MOD web interface you’ll receive an update notification, and the update process is simple to initiate.

As always, please get in touch if you have any issues, and in the meantime keep making music, keep loving life & keep enjoying your MOD Duo!

“Alles ist SUPER” - Adam @ MOD HQ

December 07, 2016 05:20 AM

December 06, 2016

Pid Eins

Avoiding CVE-2016-8655 with systemd

Just a quick note: on recent versions of systemd it is relatively easy to block the vulnerability described in CVE-2016-8655 for individual services.

Since systemd release v211 there's an option RestrictAddressFamilies= for service unit files which takes away the right to create sockets of specific address families for processes of the service. In your unit file, add RestrictAddressFamilies=~AF_PACKET to the [Service] section to make AF_PACKET unavailable to it (i.e. a blacklist), which is sufficient to close the attack path. Safer, of course, is a whitelist of address families, which you can define by dropping the ~ character from the assignment. Here's a trivial example:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
…

This restricts access to socket families, so that the service may access only AF_INET, AF_INET6 or AF_UNIX sockets, which is usually the right, minimal set for most system daemons. (AF_INET is the low-level name for the IPv4 address family, AF_INET6 for the IPv6 address family, and AF_UNIX for local UNIX socket IPC).
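For comparison, the blacklist form mentioned above keeps everything else as-is and just bans the one problematic family – a minimal sketch with the same hypothetical daemon as in the example:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictAddressFamilies=~AF_PACKET
…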

Starting with systemd v232 we added RestrictAddressFamilies= to all of systemd's own unit files, always with the minimal set of socket address families appropriate.

With the upcoming v233 release we'll provide a second method for blocking this vulnerability. Using RestrictNamespaces= it is possible to limit which types of Linux namespaces a service may get access to. Use RestrictNamespaces=yes to prohibit access to any kind of namespace, or set RestrictNamespaces=net ipc (or similar) to restrict access to a specific set (in this case: network and IPC namespaces). Given that user namespaces have been a major source of security vulnerabilities in the past months it's probably a good idea to block namespaces on all services which don't need them (which is probably most of them).
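In unit-file terms that could look like the following minimal sketch – again with the hypothetical daemon from above, and keeping in mind that RestrictNamespaces= only arrives with the v233 release:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictNamespaces=yes
…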

Of course, ideally, distributions such as Fedora, as well as upstream developers would turn on the various sandboxing settings systemd provides like these ones by default, since they know best which kind of address families or namespaces a specific daemon needs.

by Lennart Poettering at December 06, 2016 11:00 PM

December 04, 2016

The Penguin Producer

Where’s Walldo?

A lot of what makes professional-looking video is not in the editing; it’s in the recording.  And there are several techniques to shooting video that can help you get more professional-looking footage to put in your project. To make things clearer, let’s start with a video describing the basic shots …

by Lampros Liontos at December 04, 2016 01:57 AM

December 01, 2016

ardour

Ardour 5.5 released

Ardour 5.5 is now available, with a variety of new features and many notable and not-so-notable fixes. Among the notable new features are support for VST 2.4 plugins on OS X, the ability to have MIDI input follow MIDI track selection, support for Steinberg CC121, Avid Artist & Artist Mix Control surfaces, "fanning out" of instrument outputs to new tracks/busses and the often requested ability to do horizontal zoom via vertical dragging on the rulers. There are also the usual always-ongoing improvements to scripting and OSC support.

As in the past, some features including OSX VST support, Instrument Fanout, and Avid Artist support were made possible by sponsorship from Harrison Consoles.

Download  

Read more below ...

read more

by paul at December 01, 2016 11:43 AM

November 30, 2016

GStreamer News

GStreamer 1.10.2 stable release (binaries)

Pre-built binary images of the 1.10.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

November 30, 2016 05:45 PM

November 29, 2016

GStreamer News

GStreamer 1.10.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

November 29, 2016 02:00 PM

Revamped documentation and gstreamer.com switch-off

The GStreamer project is pleased to announce its new revamped documentation featuring a new design, a new navigation bar, search functionality, source code syntax highlighting as well as new tutorials and documentation about how to use GStreamer on Android, iOS, macOS and Windows.

It now contains the former gstreamer.com SDK tutorials which have kindly been made available by Fluendo & Collabora under a Creative Commons license. The tutorials have been reviewed and updated for GStreamer 1.x. The old gstreamer.com site will be shut down with redirects pointing to the updated tutorials and the official GStreamer website.

Thanks to everyone who helped make this happen.

This is just the beginning. Our goal is to provide a more cohesive documentation experience for our users going forward. To that end, we have converted most of our documentation into markdown format. This should hopefully make it easier for developers and contributors to create new documentation, and to maintain the existing one. There is a lot more work to do, do get in touch if you want to help out. The documentation is maintained in the new gst-docs module.

If you encounter any problems or spot any omissions or outdated content in the new documentation, please file a bug in bugzilla to let us know.

November 29, 2016 12:00 PM

PipeManMusic

Stay In Bed For Christmas

So I've recorded a little Christmas tune for those who are over the hype. I hope you like it. Check it out, share it, buy it, I'd really appreciate it.




by Daniel Worth (noreply@blogger.com) at November 29, 2016 07:05 AM

November 28, 2016

open-source – CDM Create Digital Music

A call for emotion in musical inventions, at Berlin hacklab

Moving beyond stale means of framing questions about musical interface or technological invention, we’ve got a serious case of the feels.

For this year’s installment of the MusicMakers Hacklab we host with CTM Festival in Berlin, we look to the role of emotion in music and performance. And that means we’re calling on not just coders or engineers, not just musicians and performers, but psychologists and neuroscientists and more, too.

The MusicMakers Hacklab I was lucky enough to found has now been running with multiple hosts and in multiple countries, bringing together artists and makers of all stripes to experiment with new performances. The format is this: get everyone together in a room, and insist on people devising new ideas and working collaboratively. Then, over the course of a week, turn those ideas into performances and put those performances in front of an audience.


This year, we hope the talks and performances will tackle this issue of emotion in some new ways – the embodiment of feeling and mind in the work. It comes hot on the heels of working in Mexico City with arts collective Interspecifics and MUTEK Festival in collaboration with CTM. (Leslie García has been instrumental in collaborating and bringing the event to Mexico.)

The open call to come to Berlin is available for submissions through late Wednesday. If you can make it at the beginning of February, you can soak up all CTM Festival has to offer and make something new.

The theme:

Now that our sense of self is intertwined with technology, what can we say about our relationship with those objects beyond the rational? The phrase “expression” is commonly associated with musical technology, but what is being expressed, and how? In the 2017 Hacklab, participants will explore the irrational and non-rational, the sense of mind as more than simply computer, delving into the deeper frontiers of our own human wetware.

Building on 2016’s venture into the rituals of music technology, we will encourage social and interpersonal dynamics of our musical creations. We invite new ideas about how musical performance and interaction evoke feelings, and how they might realize emotional needs.

I’m really eager to share how we bring music psychology and cognition into the discussion, too, so stay tuned.

And I think that’s part of the point. Skills with code and wires are great, but they’re just part of the picture. Everything you can bring in performance technique, in making stuff, in ideas – this is all part of the technology of music, too. We have to keep pushing beyond our own comfortable skills, keep drawing connections between media, if we want to move forward.

Berlin native Byrke Lou joins us and brings her own background in performance and inter-disciplinary community, which makes me still more excited.

Full description and application form link:

MusicMakers Hacklab:
Emotional Invention. In collaboration with CDM, Native Instruments and the SHAPE Platform.

The post A call for emotion in musical inventions, at Berlin hacklab appeared first on CDM Create Digital Music.

by Peter Kirn at November 28, 2016 08:05 PM

November 26, 2016

The Penguin Producer

Dr. Strangesound, or How I Learned to Stop Worrying, and Love PulseAudio

Anyone who’s done a dive through this site probably knows that I spent a lot of time with a real hate-on for PulseAudio.  It’s not very performant compared to Jackd, nowhere near as flexible, and having two sound servers at the same time does eat up a lot of processing …

by Lampros Liontos at November 26, 2016 07:00 AM

November 24, 2016

blog4

Block4 at Piksel 2016

Block 4 is active at this year's Piksel 2016 festival in Bergen: Malte Steiner is giving a workshop on 24 November:
 http://16.piksel.no/2016/11/24/piksel-pd-meeting/ 

TMS is playing on 25 November:
 http://16.piksel.no/2016/11/25/5-ht_five-levels-to-zero/ 

and Tina Mariane Krogh Madsen is performing on 26 November: http://16.piksel.no/2016/11/26/body-interfaces-zero-level-elevation/

by herrsteiner (noreply@blogger.com) at November 24, 2016 02:13 AM

November 22, 2016

open-source – CDM Create Digital Music

MeeBlip couldn’t wait for Black Friday, so it’s Red November

The MeeBlip project reaches some important milestones this year – and we get to say thanks, and celebrate with a sale. And, really, why do that for one day called “Black Friday” or “Cyber Monday” or “Arbitrary Discount Saturday Dusk Hour”? Let’s just do it for the whole rest of the month.

MeeBlip quietly turned six years old this month. That’s special in that it marks a collaboration between CDM and creator James Grahame (Blipsonic). But it also means we’ve managed to build a line of end user synthesizers that are free and open source. This isn’t a kit, it isn’t a module, and you don’t have to know or care about code or circuits. It’s ready to play as an instrument. But you’re also investing in hardware whose designs are open and under open licenses.

Sharing knowledge is what built the world of electronic music. So we think you deserve at least some products you can learn from and adapt and make without having to ask permission.

Speaking of which, the other milestone this month is that we’ve posted all those design and code files to our GitHub site. There’s even an update with some tweaks to improve triode (and we’ll upgrade early adopters for the cost of a chip + postage):
https://github.com/MeeBlip/meeblip-triode

But as I said, none of that has to matter. We want the MeeBlip to be for everybody – including people trying synth hardware for the first time.

And so we’ve also got everything on sale for the rest of the month. Red November means:

Free shipping to the USA and Canada. (Affordable shipping worldwide.)

The lowest pricing of the year for everything.

MeeBlip triode, the little synth with an analog filter and big bass sound (and new sub oscillator). $149.95 $129.95

BlipCase, the carrying and performance system for all your little music gear – MeeBlip, volcas, Roland Boutique, and more.
$79.95 $69.95
$229.95 $199.95 bundled with triode

And from our friends at independent Canadian maker iConnectivity, there’s the mio USB to MIDI interface, which adds MIDI to anything for just $29.95 on sale. (It’s an essential accessory for the MeeBlip, volcas, and loads of other synths.)

Shop now at MeeBlip.com, shipped direct

One friendly early adopter sent some shots of how much fits in the BlipCase – OP-1, volca, Roland Boutique TB-03, Kaossilator, Blue Mic, and oh yeah, MeeBlip, of course.

Now, if you do spot Cyber Monday / Black Friday deals, or if you’re collecting them, or offering them, do send them our way! Let’s spread synthesis.

Speaking of – here’s our friend Olivier with yet another wonderful jam:

Shop MeeBlip

The post MeeBlip couldn’t wait for Black Friday, so it’s Red November appeared first on CDM Create Digital Music.

by Peter Kirn at November 22, 2016 11:03 PM

GStreamer News

GStreamer 1.10.1 stable release (binaries)

Pre-built binary images of the 1.10.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

November 22, 2016 12:10 PM

November 21, 2016

rncbc.org

Qtractor 0.8.0 - The Snobbiest Graviton is out!

Hello there!

This might come as an end to a long cycle indeed, really approaching a final and last milestone, whatever...

But one thing is for sure: besides the prodigal but (pun, somewhat, not intended:)), this wraps up the so called Qstuff* Fall'16 release business deal.

Qtractor 0.8.0 (snobbiest graviton) is out!

And the release highlights are:

  • Auto-backward location marker (NEW)
  • Clip selection edge adjustment (NEW)
  • Improved audio clip zoom-in resolution (NEW)
  • Clip selection resilience (FIX)
  • MIDI (N)RPN running status (FIX)

And the band plays on...

Maybe you can further decrypt the fresh juice from the change-log below -- or rather never mind though and go for the grabs already ;).

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (on-going help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Change-log:

  • MIDI clip tools redo/undo processing refactored as much to avoid replication over multiple hash-linked clips; MIDI clip editor's floating selection/anchor event stability has been also improved, in regard to MIDI tools processing range.
  • Auto-backward play-head location, when playback was last started, is now shown on main track-view, as a momentary dark-red vertical line marker.
  • LV2 plugin-in parameter optimization: stuff consecutive series of plug-in's parameter value changes, as much as possible into one single undo/redo command.
  • LV2_STATE__StateChanged is now recognized as a regular atom notification event and raising the current session dirty flag, as normal behavior.
  • Adjusting clip selection edges is now possible and honored while on the main track-view canvas.
  • Audio peak file caching and rendering, as far as audio clip wave-forms are concerned, have been refactored and optimized a couple of notches higher, on the ephemeral and rather marginal throughput front ;).
  • Fixed a potential crash on the singleton/unique application instance setup.
  • Edit/Select Mode tool-buttons moved into single drop-down tool-button on the main and MIDI editor's tool-bar.
  • Do not reset the current clip selection when updating the main track-view extents eg. while zooming in or out.
  • Automation curve node editing auto-smoothing revisited; also fixed input MIDI RPN/NRPN running status processing, which was crippling some plug-in automation curve nodes, when saved in high-resolution 14-bit mode.
  • Fixed the visual play-head position (vertical red line) while zooming in or out horizontally.
  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.
  • LV2 Plugin-in worker/schedule fix: make request/response ring-buffer writes in one go, hopefully atomic (suggested patch by Stefan Westerfeld, while on SpectMorph, thanks).

Flattr this

 

Enjoy && Keep the fun.

by rncbc at November 21, 2016 08:00 PM

November 20, 2016

digital audio hacks – Hackaday

Make Your Eyes Louder With Bluetooth Speaker Goggles

Your eyes are cool, but they aren’t very loud. You can remedy that with this build from [Sam Freeman]: a pair of Bluetooth speaker goggles. Combine a pair of old welders goggles with a Bluetooth receiver, a small amp and a couple of cheap speaker drivers and you’re well on your way to securing your own jet set radio future.

[Sam] found a set of speaker drivers that were the same size as the lenses of the goggles, as if they were designed for each other. They don’t do much for your vision, but they definitely look cool. [Sam] found that he could run the speakers for an hour or so from a small Lithium Ion battery that’s hidden inside the goggles, along with a large lever switch for that throwback electronics feel. The total cost of this build is a reasonably-low at $40, or less if you use bits from your junk pile.

The real trick is watching them in action and deciding if there’s any motion happening. Don’t get us wrong, they look spectacular but don’t have the visual feedback component of, say, the bass cannon. Look for yourself in the clip below. We might add a pair of googly eyes on the speakers that dance as they move, but that would get away from the more serious Robopunk look that [Sam] is going for. What would you add to build up the aesthetic of these already iconic goggles?


Filed under: digital audio hacks, wearable hacks

by Richard Baguley at November 20, 2016 09:01 AM

November 19, 2016

Libre Music Production - Articles, Tutorials and News

November 17, 2016

rncbc.org

Vee One Suite 0.8.0 - A Fall'16 release


Hello again!

The Vee One Suite aka. the gang of three old-school homebrew software instruments, respectively synthv1, as a polyphonic subtractive synthesizer, samplv1, a polyphonic sampler synthesizer and drumkv1 as yet another drum-kit sampler, are now into their twelfth beta, joining The Qstuff* Fall'16 release stream.

As before, all available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The common change-log for this joint-release:

  • LV2_STATE__StateChanged is now transmitted as a regular atom notification event, as far as to give some careful hosts enough slack to raise a dirty flag.
  • Fixed input MIDI RPN/NRPN running status processing.
  • Once forgotten, loop on/off setting is now consequential (samplv1 only).
  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they go, thrice again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.0 (fall'16) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

Flattr this

samplv1 - an old-school polyphonic sampler

samplv1 0.8.0 (fall'16) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

Flattr this

drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.0 (fall'16) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Flattr this

Enjoy && keep the fun ;)

by rncbc at November 17, 2016 08:00 PM

Linux – CDM Create Digital Music

Free jazz – how to use Ableton Link sync with Pure Data patches

Effortless wireless sync everywhere has arrived with free software, too, thanks to Ableton’s new open source SDK. And it’s incredibly easy – enough so that anyone with even rudimentary patching skills will probably want to try this out.

Pure Data, the free and open source cousin of Max/MSP, looks ugly but does great stuff. And it’s worth checking out even if you use Max, because Pd is lightweight and runs on any platform – including Linux, Raspberry Pi, iOS, Android, and inside other software (like game engines). Now that it supports Link, you can make patches that run anywhere and then jam together with them.

Let’s walk you through it step by step and get you jamming.

1. Grab the latest copy of Pure Data.

Leave that dusty, ancient version of Pd aside. Because the “vanilla” version of Pure Data is now up to date and lets you instantly install any external or library, it’s the only one you likely need. (Pd extended is no longer supported.)

You’ll find it direct from Pd (and Max) creator Miller Puckette:

http://msp.ucsd.edu/software.html

2. Install the new Ableton Link external.

Here’s why you don’t need Pd extended any more – Deken is the awesome automatic external installer. (Think of it as a package manager for Pd.)

You’ll find the installer at Help > Find externals…

Type in abl_link~ in the search box.

Click the top choice (the one that isn’t grayed). A dialog box asks if you want to install to the Pd folder inside your library. Choose yes. (I only tested this on the Mac so far; I’ll be looking more at this build system in different environments as I’m teaching some workshops and going back to a triple-boot environment myself.)

Now, you can use the abl_link~ external in any Pd patch. (It installed to a path Pd searches for the active user.)


3. Get some help

Create a new Object. Type abl_link~ into the Object box. If you don’t make any typos, you’ll see the Object box get a solid rectangular outline and inlet and outlets. Right-click (ctrl-click) on the Object and choose Help to bring up the external’s help file.

Read and look around. You’ll already see tempo and beat information and the like – that’s what Pd is generating internally and sending to any other Link-enabled apps on your network.

Now, this help file will be most interesting if something else supporting Link – like Ableton Live, an iPad app, or Reason – is running on the wifi network. So go ahead and do that. Tick the Connect box, and now if you change the tempo in one of those other apps, you’ll see the tempo and beat information change here, too.

Notice that you’ve got all the same information you have in, say, Ableton Live. You can see how many other apps are connected via Link. You can see the current tempo in bpm. You can see beats. And you get more precise data you can use in your own patches.


4. Use that tempo information

Now you’ll need something to do with this info. The “step” information out of that first outlet is the easiest to use. So for instance, you could feed that into a step sequencer — connect the bang output so you send a bang every quarter note (in 4/4), for instance, or connect to a counter.

There are two settings worth particular note. One is the connect option – without this, you won’t receive incoming Link information from other apps. The other is resolution, which lets you divide beats. So for instance, if you want to divide those 4/4 quarter notes into eighth notes, set resolution to 2. Triplets, 3. Sixteenth notes, 4. And so on.


For more precision you could do some maths on the “phase” information.

What’s cool about Link is, once you’re connected, any peer – any connected app – can change tempo information. And if one drops out, the beat keeps going. There’s none of the usual futzing with master/slave (server/client) data.

Here’s an incredibly stupid proof of concept, which creates a 4-step step sequencer synced to Link’s beats.


You can paste this into a text editor, save as “peterhasastupidexample.pd” or something like that, and open it in Pd.

#N canvas 0 22 486 396 10;
#X obj 63 22 abl_link~;
#X obj 63 81 sel 0 1 2 3;
#X obj 61 115 vsl 15 128 0 127 0 0 empty empty empty 0 -9 0 10 -262144
-1 -1 4500 1;
#X obj 84 115 vsl 15 128 0 127 0 0 empty empty empty 0 -9 0 10 -262144
-1 -1 6800 1;
#X obj 108 115 vsl 15 128 0 127 0 0 empty empty empty 0 -9 0 10 -262144
-1 -1 9200 1;
#X obj 131 115 vsl 15 128 0 127 0 0 empty empty empty 0 -9 0 10 -262144
-1 -1 6400 1;
#X obj 77 51 nbx 5 14 -1e+37 1e+37 0 0 empty empty empty 0 -8 0 10
-262144 -1 -1 2 256;
#X obj 69 298 osc~;
#X obj 68 271 mtof;
#X obj 69 318 *~ 0.5;
#X obj 59 348 dac~;
#X connect 0 0 1 0;
#X connect 0 0 6 0;
#X connect 1 0 2 0;
#X connect 1 1 3 0;
#X connect 1 2 4 0;
#X connect 1 3 5 0;
#X connect 2 0 8 0;
#X connect 3 0 8 0;
#X connect 4 0 8 0;
#X connect 5 0 8 0;
#X connect 7 0 9 0;
#X connect 8 0 7 0;
#X connect 9 0 10 0;
#X connect 9 0 10 1;

But obviously the idea will be to start thinking about sequencing and time in your patches. Wherever that’s relevant, jamming just got more interesting.

Plus, because Pd patches run on other devices, you could make a little jam chorus of phones or tablets or whatever.

Note that the open source Ableton Link SDK is licensed under the GPL. If you want to use it in a commercial app, you can – but you’ll have to request a separate license from Ableton. (You’re free to use it in patches all you want, since you aren’t distributing anything.) As a testament to the fact that Ableton were bold enough to release free software, though, you can (and should) distribute your own open-source projects with the Link stuff included.

abl_link~ itself though is under a BSD license. So it’s compatible with either the GPL or the proprietary license. And that means you can dump it in patches and then move it from open to proprietary environments without worry.

But in fact, please don’t hesitate to distribute open source projects and share your patches and code. There’s a real chance here to benefit from some community.

5. Thank Peter Brinkmann.

Peter is the principal author of libpd and the creator of this external. (I was lucky enough to get to contribute to the libpd effort with him and … hope to continue contributing, somehow.)

You’ll find the code inside the libpd repository:

https://github.com/libpd/abl_link

6. Reward yourself with a free reverb.

You read this whole article! You worked hard. Sit back, relax, and install a reverb external.

Type “freeverb” into that box, and you’ll find a lovely reverb you can use in your patches.

7. Let us know how you’re using this.

We’d love to know.

Now get jamming. You just need a nice, cozy set.

We got nothing to play. – I’ll tell you what we’re gonna do.

What? – Jazz Odyssey.

The post Free jazz – how to use Ableton Link sync with Pure Data patches appeared first on CDM Create Digital Music.

by Peter Kirn at November 17, 2016 04:38 PM

GStreamer News

GStreamer 1.10.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

November 17, 2016 03:00 PM

November 15, 2016

Libre Music Production - Articles, Tutorials and News

November 14, 2016

rncbc.org

The QStuff* Fall'16 Release

Hello!

Although this is mostly a maintenance batch release and some kind of a checkpoint to this whole QStuff* life cycle, there's now this writing on the wall:

That written, all users and packagers are therefore strongly invited to update their pointers. Simply put: go for the grabs!

Enjoy and keep the fun!

 

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.4 (fall'16) is out!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:
http://qjackctl.sourceforge.net
Project page:
http://sourceforge.net/projects/qjackctl
Downloads:
http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl.git
https://gitlab.com/rncbc/qjackctl.git
https://bitbucket.com/rncbc/qjackctl.git

Change-log:

  • Fixed an early crash when the singleton/unique application instance setup option is turned off.
  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.

Flattr this

 

Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.3 (fall'16) is out!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsynth.sourceforge.net
Project page:
http://sourceforge.net/projects/qsynth
Downloads:
http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth.git
https://gitlab.com/rncbc/qsynth.git
https://bitbucket.com/rncbc/qsynth.git

Change-log:

  • Fixed a potential crash on the singleton/unique application instance setup.
  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.
  • Late French (fr) translation update. (by Olivier Humbert, thanks).

Flattr this

 

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.2 (fall'16) is out!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsampler.sourceforge.net
Project page:
http://sourceforge.net/projects/qsampler
Downloads:
http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code, http://git.code.sf.net/p/qsampler/liblscp
https://github.com/rncbc/qsampler.git, https://github.com/rncbc/liblscp.git
https://gitlab.com/rncbc/qsampler.git, https://gitlab.com/rncbc/liblscp.git
https://bitbucket.com/rncbc/qsampler.git, https://bitbucket.com/rncbc/liblscp.git

Change-log:

  • Fixed a potential crash on the singleton/unique application instance setup.
  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.

Flattr this

 

QXGEdit - A Qt XG Editor

QXGEdit 0.4.2 (fall'16) is out!

QXGEdit is a live XG instrument editor, specialized on editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG and thus probably a baseline for many other XG devices.

Website:
http://qxgedit.sourceforge.net
Project page:
http://sourceforge.net/projects/qxgedit
Downloads:
http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit.git
https://gitlab.com/rncbc/qxgedit.git
https://bitbucket.com/rncbc/qxgedit.git

Change-log:

  • Fixed a potential crash on the singleton/unique application instance setup.
  • MIDI RPN/NRPN running status and RPN NULL reset command are now supported (input only).
  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.
  • Remove extra 'Keywords' entry and fix spelling (patches by Jaromír Mikeš, thanks).

Flattr this

 

QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.2 (fall'16) is out!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. Inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl has been primarily designed for the Maemo enabled handheld devices, namely the Nokia N900 and also being promoted to the Maemo Package repositories. Nevertheless, QmidiCtl may still be found effective as a regular desktop application as well.

Website:
http://qmidictl.sourceforge.net
Project page:
http://sourceforge.net/projects/qmidictl
Downloads:
http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl.git
https://gitlab.com/rncbc/qmidictl.git
https://bitbucket.com/rncbc/qmidictl.git

Change-log:

  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.

Flattr this

 

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.2 (fall'16) is out!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Website:
http://qmidinet.sourceforge.net
Project page:
http://sourceforge.net/projects/qmidinet
Downloads:
http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet.git
https://gitlab.com/rncbc/qmidinet.git
https://bitbucket.com/rncbc/qmidinet.git

Change-log:

  • Almost complete overhaul on the configure script command line options, wrt. installation directories specification, eg. --prefix, --bindir, --libdir, --datadir and --mandir.

Flattr this

 

License:

All of the Qstuff* are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

 

Enjoy && keep the fun!

by rncbc at November 14, 2016 08:00 PM

November 13, 2016

Libre Music Production - Articles, Tutorials and News

JACK-Matchmaker - new tool to autoconnect

jack-matchmaker is a small command line utility that listens to JACK port registrations by clients and connects them when they match one of the port pattern pairs given on the command line at startup. jack-matchmaker never disconnects any ports.
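A purely illustrative invocation, going by the pattern-pair behaviour described above (the port names are hypothetical – substitute whatever your own JACK clients actually register):

jack-matchmaker "system:capture_1" "mysynth:in_left" "system:capture_2" "mysynth:in_right"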

by admin at November 13, 2016 11:16 PM

Hydrogen 0.9.7 released

Hydrogen 0.9.7 released

The Hydrogen team have just announced version 0.9.7 of Hydrogen, the last planned release before version 1.0.

Along with tons of bug fixes and other smaller changes, the main new features are as follows -

by Conor at November 13, 2016 10:15 PM

November 11, 2016

open-source – CDM Create Digital Music

Micro-ritmos turns bacteria and machine learning into spatialized sound

In the patterns generated by bacterial cells, Micro-ritmos discovers a new music and light.

From the Mexican team of Paloma López, Leslie García, and Emmanuel Anguiano (aka Interspecifics), we get yet another marvel of open source musical interface with biological matter.

Micro-ritmos from LessNullVoid on Vimeo.

The raw cellular matter itself is Geobacter, anaerobic bacteria found in sediment. And in a spectacular and unintentional irony, this particular family of bacteria was first discovered in the riverbed of the Potomac in Washington, D.C. You heard that right: if you decided to literally drain the swamp in the nation’s capital, this is actually what you’d get. And it turns out to be wonderful stuff, essential to the world’s fragile ecosystems and now finding applications in technology like fuel cells. In other words, it’s a heck of a lot nicer to the planet than the U.S. Congress.

So if composers like Beethoven made music that echoed bird tweets, now electronic musicians can actually shine a light on some of the materials that make life on Earth possible and our future brighter.

Leslie, Paloma, and Emmanuel don’t just make cool performances. They also share the code for everything they’ve made under an open source license, so you can learn from them, borrow some sound synthesis tricks, or even try exploring the same stuff yourself. That’s not just a nice idea in theory: good code, clever hardware projects, and clear documentation have helped them to spread their musical practice beyond their own work.


Check it out here – in Spanish, but fairly easy to follow (cognates are your friend):
https://github.com/interspecifics/micro-ritmos

The basic rig:

RaspberryPi B+
RasPi camera module
Micro SD cards
Arduino
Bacterial cells
Lamps
SuperCollider for sound synthesis

The bacteria act as a kind of sophisticated architectural sonic spatializer. Follow along – the logic is a bit Rube Goldberg, mixed with machine learning.

The bacteria trigger the lights, variations in the cells generating patterns.

Machine learning coded in Python then “watches” the patterns, and feeds that logic into both sound and spatialization. Sound is produced from synthDefs in the open source SuperCollider sound coding environment, and positioned in the multichannel audio system, all via control signals transmitted from the machine learning algorithm over OSC (OpenSoundControl).
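To make that last hop concrete, here’s a minimal Python sketch of sending control data to SuperCollider over OSC with the python-osc library – the addresses and parameter names are hypothetical stand-ins, not taken from the Micro-ritmos code (57120 is SuperCollider’s usual language-side port):

from pythonosc.udp_client import SimpleUDPClient

# sclang listens for OSC on UDP port 57120 by default.
client = SimpleUDPClient("127.0.0.1", 57120)

# Hypothetical control values derived from the camera / machine learning stage.
density = 0.42   # how busy the bacterial pattern currently looks
azimuth = -0.8   # where to place the sound in the multichannel field

client.send_message("/microritmos/density", density)   # made-up OSC address
client.send_message("/microritmos/azimuth", azimuth)   # made-up OSC address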

Imagine the bacteria are live coding performers. They generate a kind of autonomous, real-time graphical score for the system.

In some way, this unstable system is a modern twist on the experiments of the likes of Cage and Tudor. But whereas they found these sources in the I Ching and unpredictable circuitry and feedback systems, respectively, here there’s a kind of grounding in some ecological, material microcosm.

It’s funny, the last day I was in Mexico City, I saw an exhibition of organic architecture by the Mexico City native Javier Senosiain. Senosiain built homes that found some harmony with their natural environment, forms from organic material. Here, there’s a similar relationship, scaled from microcosm to macrocosm in unpredictable ways. But that means that this is not sound synthesis that establishes some dominion over nature; it allows these cells some autonomy to produce the composition of the piece. And I don’t just mean that in some lofty philosophical sense: the sonic results are radically different.

Beautiful work, presented for the first time in Medellín, Colombia.

More soon from this trio, as we worked together in Mexico City last month with MUTEK.mx.

The post Micro-ritmos turns bacteria and machine learning into spatialized sound appeared first on CDM Create Digital Music.

by Peter Kirn at November 11, 2016 06:37 PM

November 09, 2016

ardour

Ardour Looks, part 197.2

From a discussion on http://cdm.link about FL Studio:

For those who wonder why I tend to ignore/discount "Ardour needs to look better" comments, this is a great introduction :)

by paul at November 09, 2016 02:14 AM

November 08, 2016

Libre Music Production - Articles, Tutorials and News

Two high quality drumkits for DrumGizmo

Two high quality drumkits for DrumGizmo

Through a collaboration with Michael Oswald, Libre Music Production are proud to present two high quality open source drumkits for DrumGizmo.

Michael has written a tool (DGPatchMaker) in the programming language Haskell to create DrumGizmo drum kits from existing sample libraries, and using that, he has created patches for the free SM Mega Reaper Drumkit and the Salamander kit.

by admin at November 08, 2016 11:25 PM

November 04, 2016

open-source – CDM Create Digital Music

There’s a new way to make your iPhone run any Pd patch, free

You’ve got an instrument or effect running in Pure Data, for free, on your computer. (If you don’t know how to do that, more in a moment.) Leave the computer at home. Play that sound creation on your iPhone (or iPad).

The implementation of Pd on iOS and Android started its life with RjDj. But PdParty (and PdDroidParty before it) have gone steadily further. Now you can almost treat the graphical patching environment Pd on the computer as your development environment – patch away on your computer, then duplicate that patch complete with UI on your phone. It also means that you can ditch the laptop and run everything on an iOS gadget, perfect for integrating Pd with other gear. (There are other hardware solutions to that, too – I’ll have to do a round-up soon.)

PdParty goes the furthest yet.

Here’s how it works:

You patch normally in Pd. Once you’re thinking you’ll run a patch on the iOS gadget, there are templates that help you adapt to that scenario – with audio in and out, and the appropriate screen layouts. Choose the widgets you want for the UI, organize them so they’ll fit on the screen, wire up the sound, make some minor adjustments, and you’re good to go.

Your iOS device then runs a server that lets you load patches onto the phone/tablet.

But wait – there’s more. PdParty adds some features on top of that, some inspired by RjDj and PdDroidParty, but some new.

All the fixin’s

Custom widgets make it easy to adjust audio input level, turn on or off sound operation, and start or stop recording of whatever you’re doing (perfect for capturing ideas).

Play back files via a prepared widget.

OSC. Send and receive OpenSoundControl messages.

MIDI. Send and receive MIDI – now, that works with other apps, with connected hardware, and over a network to a Mac (that should be hackable to PC, too). There’s a quick sketch of driving that MIDI input after this list of features.

Game controllers. MFi game controller input support works, too, on top of those MIDI gadgets.

Use sensors. You can also read data from the iOS gadget’s various sensors – that includes motion, location, and other inputs.

Backwards compatibility. Out of the box, you can add scenes from tools like RjDj.

Native widgets for UI. Basically, Pd sliders and checkboxes and knobs all work on the iPhone. It’s the next best thing to running Pd directly (which isn’t possible — yet).
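
As a taste of what the MIDI support above opens up, here’s a minimal Python sketch using the mido library to send a control change that a patch running in PdParty could pick up with [ctlin]. The port name and controller number are assumptions; substitute whatever your network MIDI session or hardware actually exposes.

```python
# A minimal sketch of driving PdParty's MIDI input from Python with mido.
# The port name and controller number below are assumptions for your setup.
import mido

print(mido.get_output_names())  # list the MIDI outputs available on this machine

# On macOS this is typically the network session enabled in Audio MIDI Setup;
# substitute the name your own system shows.
out = mido.open_output("Network Session 1")

# Send controller 7 on channel 1, which a [ctlin 7] object in the patch can read.
out.send(mido.Message("control_change", channel=0, control=7, value=100))
```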

Who would do such a thing?

Why, Dan Wilcox would. The need to do so became apparent because Dan was regularly in the habit of dressing up as a robot and running around parties playing music. Clearly, strapping an iPhone or iPod touch to a belt then makes loads of sense.

What? Of course this is the use case. It’s obvious. (Thanks, Dan, as always – brilliant engineering work, applied to brilliant party ideas. That’s the power of Pd party engineering.)

Getting started

This is definitely for people interested in Pd patching. But it could also be a fun way to start learning.

You’ll want to work with vanilla Pd patches, but you can add rjlib. Actually, even if you’re not terribly good at patching, you can use rjlib as a free library of lots of cool synths and effects and so on, plus a mess of abstractions that make life easier.

With rjlib in hand, I think anyone could get something working in a few days. I recommend the following resources to get started:

flossmanuals.net/pure-data/ (a bit easier)

http://pd-tutorial.com/ (a bit more focused on synthesis/sound – including some stuff the other link leaves out – and available in German and Spanish as well as English)

If you have a tutorial in mind, though, I’m thinking of writing a simplified one. It could be a nice way to celebrate 20 years of Pd.

And if you’re in New York, later this month there’s a conference.

http://www.nyu-waverlylabs.org/pdcon16/

Plus, if you want to write full-fledged mobile apps powered by Pd, check libpd. I’m working on some updates to this shortly. (Teaching Pd at the moment is helping, for sure!)

http://libpd.cc

The post There’s a new way to make your iPhone run any Pd patch, free appeared first on CDM Create Digital Music.

by Peter Kirn at November 04, 2016 07:38 PM

November 02, 2016

GStreamer News

GStreamer 1.10.0 stable release (binaries)

Pre-built binary images of the 1.10.0 stable release of GStreamer are now available for Windows (32/64-bit), iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

November 02, 2016 04:10 PM

November 01, 2016

GStreamer News

GStreamer 1.10.0 stable release

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

As always, this release is again packed with new features, bug fixes and other improvements.

See /releases/1.10/ for the full list of changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly after the source release by the GStreamer project during the stable 1.10 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

November 01, 2016 04:30 PM

digital audio hacks – Hackaday

FingerRing is Simplest Multichannel Mixer

It’s hard to make an audio mixer with any less technology than FingerRing (YouTube video, embedded below). We’re pretty sure that [Sergey Kasich] isn’t going to get a patent on this one. But what he does get is our admiration for pushing a simple idea far enough that it’s obviously useful.

The basic idea is transmitting signals using the human body as a conductor. What [Sergey] does is lay out multiple sound sources and sinks on the table, and then play them like a mixer turned musical instrument. Pressing harder reduces the resistance and makes the sound louder. Connecting to two sources mixes them (in you). Watch the video — he gets a lot of mileage out of this one trick.

We can think of a number of improvements to this system. A bunch of nails in a board acting as the contact points would be a lot easier to play than 1/4″ cables taped down to a desk. You could make it a permanent instrument. If you were designing the system from scratch, you’d want high-input-impedance amplifiers on the receiving end. Add a notch filter to kill the mains hum, or an instrumentation amp and another electrode on your ankle. Pretty soon, you’re an EKG and a mixing desk.
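
For the mains-hum part of that wish list, here’s a minimal sketch of a digital notch filter in Python with SciPy; the sample rate, hum frequency, and Q are assumptions to adjust for your own region and interface.

```python
# A minimal sketch of the "notch filter to kill the mains hum" suggestion.
# Sample rate, hum frequency, and Q below are assumptions.
import numpy as np
from scipy import signal

fs = 48000.0  # sample rate of the audio interface (assumed)
f0 = 50.0     # mains frequency: 50 Hz in Europe, 60 Hz in the US
Q = 30.0      # quality factor: higher means a narrower notch

# Second-order IIR notch at the hum frequency, normalized to Nyquist.
b, a = signal.iirnotch(f0 / (fs / 2), Q)

def remove_hum(x):
    """Zero-phase filter a mono float array recorded from the body-contact mixer."""
    return signal.filtfilt(b, a, x)

# Example: clean up one second of a test tone buried in hum.
t = np.arange(int(fs)) / fs
noisy = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * f0 * t)
clean = remove_hum(noisy)
```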

The biggest limitation he has is that the body is just one big conductor. Can anyone think of how to get multiple channels through flesh and bone without making the whole setup over-complicated? Now excuse us as we tape instrument cables all over our desk. We’re inspired.


Filed under: digital audio hacks

by Elliot Williams at November 01, 2016 11:01 AM

October 30, 2016

Scores of Beauty

Distributed Editing, Responsibility and Quality

Recently a concern with distributed editing concepts has been brought to my attention. As this adds to other existing reservations I have already been aware of, and as I’ll coincidentally be talking about exactly that matter at a “Winterschool” conference in a few weeks, this is a good opportunity for a post about responsibilities in distributed edition workflows. I am convinced that any reservations about compromising the quality of musical editions by giving up established workflows or by incorporating the work of multiple contributors are completely unfounded, or rather solely based on fear of the unknown and fear of change. In fact, distributed work with text-based tools and version control gives lots of flexibility and exciting new possibilities, and adds multiple layers of safety nets rather than adding risks.

Traditional Editing Chains

Workflows in publishing houses or academic edition institutes are characterized by a clear separation of concerns and responsibilities. Of course there are differences, but in some way or other they are variations of a document flow like:

  • Existing copy
    (Editor works on an arbitrary pre-existing score)
  • Review/engraver’s copy
    (Check editor’s work)
  • Typesetting
    (Enter the music in a new score document)
  • Proof-reading/editing loop
    (Check typesetting against engraver’s copy, re-check editor’s work)
  • Engraving fine-tuning
    (Bringing the score to publication quality; hopefully there won’t be any content changes required after that step)
  • Compilation of volume/prepress

with a number of possible reiteration loops. While individual persons may be assigned to more than one of these stages the point is that the stages are cleanly separated and responsibility for each stage is clearly attributed.

Passing Files Around

One major problem with this traditional toolchain is the need to constantly pass around files and copies of files. In an earlier post I outlined the serious problems that arise from that and how working with LilyPond and a version control system like Git simply makes them vanish. These advantages alone are probably sufficient to decide to switch to using a version control system.

Working on a Common Data Repository

At the core of a distributed workflow there is a common data repository which is controlled by a version control system like Git and hosted on a central server. Of course there are many exciting things about that, but for today I’ll only mention one: as everybody has parallel access to all project files and all tools are freely available, technically each team member can perform any task whenever it’s necessary or they feel like it. An extreme manifestation of this would be a project where all responsibility is fully shared by all team members, leaving the actual process to self-organization.

This prospect seems frightening to people who are used to traditional editing workflows, and there are two reservations commonly expressed with regard to such a concept. Some people worry about quality control when access to the data isn’t restricted by a hierarchical division of labor, and some simply do not want their responsibilities changed and weakened (fearing that might open the door to anarchy and chaos).

Traditional Workflow With Version Control

The first thing that has to be said here is that version controlled set-ups do not require you to go all the way. Even with version control it is possible to model a completely traditional approach to the editing toolchain. One person may enter the music while another is proof-reading it, then the main editor does his critical review, after which the edition is proof-read again and the engraving beautified by a professional engraver. Finally a graphic designer could combine the score with textual elements and do the pre-press to submit the final compiled volume to the printer. Responsibilities can be tailored exactly like with other toolchains if this is desired. Already this would be an improvement, especially in terms of quality control.

At a fundamental level the basic difference to traditional toolchains is that in version-controlled environments documents don’t have to be passed around through shared drives or by email. Through this alone, all the hassles and potential issues that arise from creating digital copies of documents become obsolete. It is, for example, inconceivable to mess up a document by having two people edit different copies of it independently. Also it is more or less impossible that changes to a document would go by unnoticed just because the last editor failed to document them. Put the other way round: it is not necessary to accompany a modified document with an email listing all the modifications done to it, because the person working on it next can simply check the commit to see what has been done:

A “commit” reveals the detailed changes to a file. Click to see the full commit online.

Already at this level it should become clear that using collaborative tools actually increases the level of quality control rather than giving way to poor standards or compromise.

Experiencing the Benefits of Version Control

Add to that the additional virtues of version control by stepping back from the strictly sequential workflow of the olden days and by loosening the fixed distribution of responsibilities. By allowing contributors to perform different tasks based on their skill set and current availability, they waste much less time waiting for appropriate work to flow in. If there should currently be no music to be proof-read, an engraver could instead spend his time entering new music or working on the overall appearance of the engraving. (Basically this is an opportunity to implement the Kanban methodology from software development in musical edition processes.)

But more importantly, version control provides an additional safety net through the possibility of working in isolated sessions (or branches). Work on a given topic (for example “the critical review of the second movement” or “entering the fingerings from the composer’s copy”) can be encapsulated in such a branch, and only when this task has been completed will the work be integrated (we say “merged”) into the main line or “master” branch. That master branch – which can be understood as representing the official state of the edition – remains unaffected up to this point of merging and proceeds directly from one consistent state to another. This functionality ensures that different people can work on different tasks in parallel, without any risk of causing confusion or messing up the documents. It is also possible to add another layer of quality control by deciding who is eligible to actually perform that merge step.
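
To make the branch-and-merge idea tangible, here’s a minimal Python sketch that drives Git through the command line for one such isolated session; the branch name, file name, and commit message are hypothetical examples, not part of any real edition project.

```python
# A minimal sketch of the branch-per-task workflow described above, driving
# Git from Python via subprocess. Branch, file and message are hypothetical.
import subprocess

def git(*args):
    """Run a git command in the current repository and fail loudly on errors."""
    subprocess.run(["git", *args], check=True)

# Start an isolated session for one clearly scoped task.
git("checkout", "-b", "fingerings-movement-2")

# ... enter the fingerings in your LilyPond editor, then record the work:
git("add", "movement2.ly")
git("commit", "-m", "Enter fingerings from the composer's copy")

# Only after review does someone merge the task back into the official state.
git("checkout", "master")
git("merge", "--no-ff", "fingerings-movement-2")
```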

So collaborative work does not cause more confusion – quite the contrary.

Constant Peer Review

Version controlled collaborative workflows not only take care of a more robust editing environment, but they actually allow assigning tasks to an arbitrary number of contributors and managing them reliably, which makes it possible to organize projects in completely new ways – without compromising quality.

About two years ago I wrote a number of posts on this blog documenting a “crowd engraving” project where we successfully experimented with exciting workflow techniques. What I found particularly intriguing was the extent to which contributors of wildly varying qualification could produce high-quality material given an appropriate project set-up. Our workflow was arranged around splitting up the huge project (the end result was 50 minutes of full orchestra with choirs and soloists, densely printed on 100 pages of A3 paper) into small chunks. Every little contribution was done in a separate branch, and the agreement was that whenever someone had finished entering some music someone else had to review it before merging back to the master branch. This approach – which can of course be equally applied to the stage of scholarly review – had several important implications which I’d like to sum up with the term constant peer review. The most obvious consequence was that every single measure of music integrated in the “official” score had already been proof-read once, that is seen by at least two pairs of eyes. So we didn’t permanently live under the pressure of “someday” having to do the proof-reading.

Not as obvious but at least equally important is the fact that such short-term peer review encourages direct communication between contributors. While this doesn’t necessarily sound dramatic, it actually boosts both creativity and scholarly scrutiny, as I’ve described in an earlier short post. In our project we made use of the scholarLY library to maintain annotations within the score document. And these things together actually had mind-blowing consequences. Contributors had the possibility to add “musical issue”-type annotations, pointing to problematic spots in the score. Knowing that someone else would be looking at the annotation before merging it (either commenting on it, changing it to a proper “critical remark” or even discarding it) significantly lowered the bar for people to spell out their observations. It was truly inspiring to see that the quality of these observations was very much independent of the formal qualification of the contributor. In other words: when entering music, the bank accountant and hobby musician noticed issues with the manuscript just as I did, and knowing that a musicologist would be taking over responsibility, he didn’t hesitate to document them.

Full Documentation

As a closing remark I’ll comment on the feature that may provide the most fundamental safety net among all the bells and whistles of version control: full and automatic project documentation. I won’t go into detail here (maybe look at some of our posts tagged with version control) but documenting any modification and attributing it to its author, and the possibility to edit and revert any such change selectively at any later time are invaluable tools that massively increase the safety and eventually also the quality of the editorial results. And as a second aspect this fully documents each team member’s contributions, making it possible to credit the actual work in a pretty fine-grained manner.

The point of this story is that versioned workflows give projects a level of control that traditional approaches can’t even come near. There is absolutely nothing to be afraid of: neither loosening the attribution of responsibilities nor the inclusion of arbitrary numbers of contributors of different qualification pose any risk of weakening the quality standards of the resulting edition. Quite the contrary, properly applied strategies from software development can help to significantly boost creativity, scholarly scrutiny and overall efficiency and quality of any music edition project.

by Urs Liska at October 30, 2016 07:36 PM

Libre Music Production - Articles, Tutorials and News

LMP Asks #21: An interview with Yassin Philip

Hi Yassin, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

by yassinphilip at October 30, 2016 06:59 PM

October 25, 2016

digital audio hacks – Hackaday

Death To The 3.5mm Audio Jack, Long Live Wireless

There’s been a lot of fuss over Apple’s move to ditch the traditional audio jack. As for me, I hope I never have to plug in another headphone cable. This may come off as gleeful dancing on the gravesite of my enemy before the hole has even been dug; it kind of is. The jack has always been a pain point in my devices. Maybe I’ve just been unlucky. Money was tight growing up. I would save up for a nice set of headphones or an mp3 player only to have the jack go out. It was a clear betrayal and ever since I’ve regarded them with suspicion. Is this the best we could do?

I can’t think of a single good reason not to immediately start dumping the headphone jack. Sure it’s one of the few global standards. Sure it’s simple, but I’m willing to take bets that very few people will miss the era of the 3.5mm audio jack once it’s over. It’s a global episode of the sunk cost fallacy.

In the usual way hindsight is 20/20, the 3.5mm audio jack can be looked at as a workaround, a stopover until we didn’t need it. It appears to be an historic kludge of hack upon hack until something better comes along. When was the last time it was common to hook an Ethernet cable into a laptop? Who would do this when we can get all the bandwidth we want reliably over a wireless connection? Plus, it’s not like most Ethernet cables even meet spec well enough to deliver the speeds they promise. How could anyone reasonably expect the infinitely more subjective and variable headphone and amplifier set to do better?

But rather than just idly trash it, I’d like to make a case against it and paint a possible painless and aurally better future.

Ingress

Let’s say you had to design a consumer-facing device that goes in someone’s pocket. A pocket is dusty. It’s moist and sweaty. You know your stuff, so you’re already thinking about gaskets and IP ratings. Then someone hands you the spec sheet. They let you know that they want you to drill right into it and put an unserviceable deep hole in the case. Now rinse and repeat for every portable device on the planet, and it seems like an odd mass hallucination.

I guess if someone were having a really bad day they could spill coffee at the switchboard… [CC Joseph C.]

There is no good way to seal or maintain a 3.5mm headphone jack. Some phone makers have tried by adding a little gasket or a flap, but this doesn’t last. There’s also a chance that it could be sealed off, but since it has to have little springs and holders inside, it’s still susceptible to damage from liquids and dust by nature. I’ve even seen some get irreparably corroded by the salt from sweat alone.

It’s like we all agreed to ignore the fact that these connectors were designed to be used in a switchboard. A nice clean dry switchboard in a professional location where it would be used by trained personnel and serviced regularly. It was designed to be an easy-to-use connector that could be plugged in and removed quickly for low-quality audio phone switching. It was never designed to be the end-all connector for quality audio signals. Moving it out into the world was arguably just a quick hack: using a connector that was already adopted and manufactured on a large enough scale when home audio began to be a common thing.

We’ve already gotten rid of the keyboards on mobile devices (which is a shame, but that’s another article), and every manufacturer seems horribly committed to irreplaceable batteries, so there’s just no reason not to move towards fully waterproof and dustproof devices. There could at least be a bright side. The audio port is holding us back.

Cable Strain

It’s not the cord’s fault. It was sent to the frontlines without the right equipment. [CC Paul Hussey]

Next comes cable strain. People like to complain about how the iPhone earbuds would constantly break at the joint. This is true, and other brands had better strain relief. However, it’s also true that all audio cables that go into a pocket will break before any of the other components reach their end of service life. By nature, a pocket exceeds every reasonable expectation of in-tolerance cable strain. It is a hostile environment. My last set of headphones went through two cables during regular use. Which segues right into the next design flaw: force.

Force

As mentioned before, the audio connector was designed to be easily inserted in a switchboard room. It would see no dramatic force on it. So it’s a tall connector that is easy to hold and easy to use. It’s also supposed to be a low-insertion-force connector. So it’s unreasonable to expect it to be able to hold a cable in place reliably.

However, when put into a pocket it suddenly sees forces perpendicular to its axis. This can cause some extremely large moments on a very tiny plastic and spring-metal socket. We all know that the longer we own our phones the less able our headphone socket will be to hold the jack in place. There’s simply no way to design something that small to take that much force and keep it cost effective. Rather it looks like we’ve just adjusted our expectations and then forgot that we even made that adjustment.

This seems even more insane from a design perspective when you consider that this connector which sees dramatic forces is actually attached to the mainboard of your device (to be fair, most smartphones do use spring connectors between the jack and mainboard, but think about laptops and other gear). Solder connections are not flexible. The metals we use for solder are very susceptible to work hardening and breaking under cyclical forces. So not only do you flex the connection of the port to the board itself, you also flex all the surrounding components. It’s no mystery that the audio and USB ports are among the most common repairs on mobile devices.

Sound Quality

Bluetooth’s codecs perform comparably to 320 kbps MP3, which is beyond the ability of most listeners (including the author) to distinguish. From Serene Audio.

Right now there is still a difference in sound quality between Bluetooth and wired. There’s no reason to expect it to last long. Bluetooth is now capable of some seriously impressive bandwidth and with an actual market erupting for the headsets, it won’t be long before this is a moot point. I’m picking on Bluetooth specifically because it’s the only standard that’s both universal and intended, at least, for hooking peripherals up.

There are big arguments for the sound quality aspect of the 3.5mm headphone jack. I think that, frankly, most of them make no sense against the transition. If you’re sitting still in your home listening chamber with a perfectly tuned preamplifier connected to quality headphones while listening to FLAC audio from your dedicated music computer, you might be able to hear a perceptible difference from hooking directly to your phone with a Bluetooth headset. But you’re not. You have a noisy connection from a worn-out port to a low-quality cable carrying an unamplified signal to some cost-engineered headphones. It’s a wash, I think.

Plus, it’s not like switching to a wireless standard is going to absolutely kill the wired headphone market. You’ll still be able to get wired headphones for when the wire matters. People who are paying a hundred dollars plus for quality sound out of a wired headset will still have their toys. That market is very far from death. People who were paying ten bucks for whatever are not going to notice at all.

Most phones and portable devices waste zero energy trying to amplify the signal in a meaningful way. So if you want the full range of your headphones you have to add an amplifier. Then there’s the fact that they’re already class D audio amps trying to maximize the device’s battery life. By the time it gets to your ear it’s been triple digitized to death. Fortunately, we now have more processing power inside greeting cards than we reasonably know what to do with, so it’s unlikely that most would notice the difference.

However, modern Bluetooth audio chips are actually really great, and they’re only getting better. They’re ultra-low-power class D amplifiers which were built and optimized for sound quality. With a lithium battery right there inside the headphone, there’s no reason not to expect engineers to take advantage of that and stop designing every driver in the world to run off the two or three magic pixies a cell phone is willing to give it. It should actually be possible to have significantly better sounding wireless headphones than wired.

Convenience and User Experience

It’s a cross-cultural joke at this point.

I bought a very cheap set of Bluetooth headphones off Amazon. I have rarely been so pleased with a purchase. Did they sound good? Not really, but I don’t expect any ten dollar headset to sound good. What I did get was an average of ten days of on and off use before the battery needed charging. I could go to the climbing gym and leave my cellphone on the ground while I climbed. When I worked on projects in the hackerspace I could walk up to thirty feet from my phone and not miss a word of my audio book. It connected automatically. It played nice. It was a better experience in every way.

With my headphones I’m always fighting with the cable. I’m always arranging my phone in my pocket so the cord isn’t flexed too much. It’s a cultural meme that headphones know more knots than we do.

Sure, there are some flaws with Bluetooth. Will we cover battery replacement hacks in a few years? Probably. Will there be growing pains? Of course. Will they be ironed out in the next few years? Most likely.

Transition

So how do we transition? Well, the first step is done. Have a big player finally give up on the port. It’s time. But what about all the things that are nice about corded headphones? The global standard? The fact that you can contribute to the complete devastation of our planet by buying them cheaply by the pound instead of being a grown adult who can hold on and take care of a quality item? How about their universal integration with every device that wants to put a sound out?

It’s not like we don’t have other really nice global standards that could power a headphone set. [CC Maurizio Pesce]

But we do have other global standards that can transmit sound signals. We have USB. While I hesitate to give Apple too much credit after they threw their lot in with Beats, in this regard they are also showing the way. A dongle is an inelegant example, but only as a transition out of the 3.5mm port. What if your headphones just had a USB-C port on one end and you could plug the cable of your choice right into your mobile? The phone has the ability to power some accessories, and as long as it’s designed to switch off the charging circuit while it’s at it, there’s no reason it won’t work. We can all transition painlessly. We really won’t miss it.

Laptops could definitely simultaneously charge and play. If your battery is running low, just hook it up to USB. You get the cord experience and the universal standard experience we’ve all come to love. Just without a weird analog connector from the birth of electronics. All the LEGO pieces are there, we just need to build the spaceship.

All that is pedantic though. Portable audio has never been a power-hungry game and in the end I just don’t think people will notice the cable woes. I thought I would and I don’t. I’m already so used to plugging things in when the situation requires that I just do it and that’s that.

It’s time for the 3.5mm legacy to go. I hope others follow Apple’s lead. I hope all the major headphone makers turn their eyes to wireless audio and the possibilities it offers. There are already quality sets out there and it will only get better. I won’t miss it. I don’t miss magnetic hard drives. I don’t miss CDs and MiniDiscs. I haven’t tuned the bunny ears on a television in at least a decade. I don’t even own an Ethernet cable, nor have I used a DB9 serial cable for hardware development in years. The future moves on and this time I think it will show itself to move in exactly the right direction.


Filed under: Current Events, digital audio hacks, Featured, rants, slider

by Gerrit Coetzee at October 25, 2016 02:01 PM

October 24, 2016

Libre Music Production - Articles, Tutorials and News

Newsletter October 2016 – LMP Asks interview, four tutorials and lots of FLOSS news!

Our newsletter for October is now sent to our subscribers. If you have not yet subscribed, you can do that from our start page.

You can also read the latest issue online. In it you will find:

  • 'LMP Asks' interview with Marius Stärk
  • Four new tutorials
  • Lots of new software release announcements

and more!

by admin at October 24, 2016 11:23 PM

October 21, 2016

open-source – CDM Create Digital Music

Watch an amazing unboxing and jam with MeeBlip triode

Working in the synth business is basically one of the most fun things you can do. So in addition to the pleasure of getting reports from owners, we wake to total surprises like this video from Olivier Ozoux, who has made a terrific stop motion unboxing video and live jam with the synth.

MeeBlip joins the Korg electribe sampler and Squarp Pyramid sequencer for a rather fine all-hardware setup. You watch the triode emerge from its box, where it’s been hand-packed by MeeBlip creator James Grahame, then dive into the jam. (He manages to make the resonance sound like an extra percussion part at one moment.)

Wait for it – around 1:13 the sub kicks in. I do this for a living and I still get irrational glee out of bass.

The second batch of MeeBlip triodes is about to hit assembly and shipping now.

http://meeblip.com

I hadn’t seen Olivier’s series, and now realize it’s full of charming videos like this. Subscribed – for real.

For instance, speaking of open source hardware, here’s a film of the PreenFM2, assembled into a gorgeous, futuristic white 3d-printed case:

Subscribe to his Musique Électronique on YouTube

The post Watch an amazing unboxing and jam with MeeBlip triode appeared first on CDM Create Digital Music.

by Peter Kirn at October 21, 2016 05:11 PM

Libre Music Production - Articles, Tutorials and News

LSP Plugins version 1.0.14 released!

Vladimir Sadovnikov has just released version 1.0.14 of his audio plugin suite, LSP plugins. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

by Conor at October 21, 2016 08:07 AM

October 20, 2016

Linux – CDM Create Digital Music

PiDeck makes a USB stick into a free DJ player, with turntables

There’s something counterintuitive about it, right? Plug a USB stick into a giant digital player alongside turntables. Or plug the turntables into a computer. What if the USB stick … was the actual player? In the age of rapid miniaturization, why hasn’t this happened yet?

Well, thanks to an open source project, it has happened (very nearly, anyway). It’s called PiDeck. And it radically reduces the amount of gear you need. You’ll still need an audio interface with phono input to connect the turntable, plus the (very small, very cheap) Raspberry Pi. But that’s just about it.

Connect your handheld computer to a turntable, add a control vinyl, and you’re ready to go. So your entire rig is only slightly larger than two records, plus some gear the size of your two hands.

You have a rock-solid, Linux-based, ultra-portable rig, a minimum of fuss, essentially no space taken up in the booth – this all makes digital vinyl cool again.

It works with USB sticks (even after you yank them out):

And you can scratch:

Their recommended gear (touchscreens these days can be really compact, too):

  • A recent Raspberry Pi (only the Pi 3 Model B tested so far) and power supply. First-generation Raspberry Pis are not supported, sorry
  • Touchscreen (single-touch is enough), or an HDMI monitor and keyboard
  • Stereo, full-duplex I2S or USB soundcard with a phono input stage, or line input and an external pre-amp; the soundcard must be supported by ALSA
  • Micro SD card for the software, at least 2GB in size, and an adaptor to flash it with
  • Control vinyl, Serato CV02 pressing or later recommended
  • USB stick containing your favourite music. FLAC format is recommended (16-bit 44100Hz format tested)
  • Non-automatic record player that can hold speed, with a clean, sharp stylus. It helps scratching if the headshell and arm are adjusted correctly
  • Slipmat, made from felt or neoprene
  • Sheet of wax paper from the kitchen drawer, to go under the slipmat

Previously from this same crew (more just a fun proof of concept / weird way of DJing!):

This is how to DJ with a 7″ tablet and an NES controller

Check out the project site:

http://pideck.com

And you can download this now – for free.

https://github.com/pideck/pideck-distro/releases/

Developer Daniel James writes us with more details on what this whole thing is about:

Chris (in cc) and I have been working on the project in spare time for a couple of months, here on the Isle of Wight. Chris built the hardware prototype and did most of the work on the custom Debian distro.

The idea behind the PiDeck project is to combine the digital convenience of a USB stick with the hands-on usability of the classic turntable, in a way which is affordable and accessible. The parts cost (at retail) for each PiDeck device is currently about £150, not including a case or control vinyl. There is no soldering to do; the hardware screws and clips together.

I used to run DJ workshops for young people, and found that while the kids were really happy to get their hands on the decks, a lot of them were put off by having to use the laptop as well, especially the younger kids and the girls. The teenage boys would tend to crowd around the laptop and take over.

Then there’s the performance aspect of real turntables which some digital controllers lack, and the sneaking suspicion that the computer is really doing the mixing, or worse still, just running through a playlist. PiDeck doesn’t have any mixing, sync or playlist features, so the DJ can take full credit (or blame) for the sound of the mix.

We’ve deliberately put no configurable options in the interface, and there are no personal files stored on the device. This helps ensure the PiDeck becomes part of the turntable and not unique, in the way that a laptop and its data is. This makes the PiDeck easier to share with other DJs, so that there should be no downtime between sets, and should make it easier for up-and-coming DJs to get a turn on the equipment. If a PiDeck breaks, it would be possible to swap it out for another PiDeck device and carry right on.

Although the DJ doesn’t have any settings to deal with, the software is open source and fully hackable, so we’re hoping that a community will emerge and do interesting things with the project. For example, multiple PiDeck devices could be networked together, or used to control some other system via the turntable.

Yeah – this could change a lot. It’s not just a nerdy proof of concept: it could make turntablism way more fun.

The post PiDeck makes a USB stick into a free DJ player, with turntables appeared first on CDM Create Digital Music.

by Peter Kirn at October 20, 2016 07:22 PM

October 19, 2016

Libre Music Production - Articles, Tutorials and News

Using AVL drumkits with a-Fluid Synth in Ardour

In this tutorial I will show you how to use Glen MacArthur's fantastic AVL Drumkits sample pack with 'a-Fluid Synth', Ardour's built in FluidSynth plugin. I will also show you how to load midnam files to make it easier to do drum programming within the DAW.

by Conor at October 19, 2016 09:12 AM

AVL Drumkits updated to version 1.1

Glen MacArthur, maintainer of AVLinux, has just announced version 1.1 of his AVL Drumkit sample pack intended to bring an "authentic acoustic, organic drum sound to your MIDI DAW arrangements and preserve real-world characteristics such as tom ringing and overtones unlike many General MIDI kits that sound sterile."

by Conor at October 19, 2016 08:31 AM

October 14, 2016

News – Ubuntu Studio

Ubuntu Studio 16.10 Released

We are happy to announce the release of our latest version, Ubuntu Studio 16.10 Yakkety Yak! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]

by Set Hallstrom at October 14, 2016 11:26 PM

MOD Devices Blog

MOD at Waves Vienna

Hello music-makers!


It’s me, Adam again - MOD Devices mad resident music hacker & polymath at your service. If you’ve been at basically any music-focused hackathon in the last few years, chances are we’ve met already. I’m also going to be at a bunch of the amazing upcoming music hacking events, including Music Hackday Berlin along with my awesome colleagues from MOD. Quite a treat considering we were just at the amazing Waves Vienna Music Hackday a few weeks ago, where we challenged participants to come up with the best way to transform gestures into sounds using the MOD Duo. If you haven’t already, check out the video above for a little taster of what the day was like!

Ignore the knuckle tattoos, I’m not really a thug

Now, as you can see, I’m a pretty recognisable person so if you spot me at a conference, festival or hackathon please don’t hesitate to come and say hello! Perhaps you’ll spot me as part of a motley crew, all sporting MOD Devices T-shirts. Come and meet the team!

A captivated audience at our MOD Duo demo session

Waves hackathon attendees loved the MOD Duo, and after our demonstrations we lent out Duos to teams & individuals from all backgrounds, disciplines, and parts of the world. There was such a talented group of hackers in attendance at the event and the Duo ended up becoming an integral part of many of the amazing projects that were created in just a single day.

Sweet Spotting by Johannes Wernicke

As you can see from Johannes Wernicke’s project Sweet Spotting, the MOD Duo managed to find its way into some projects of full “Mad Scientist” calibre. Johannes constructed a device which utilised an Acouspade ultrasonic directional speaker array, microphone, Kinect camera and a Duo to create an augmented-reality sonic environment. Thanks to a motorised mount, it follows a listener around and consistently aims the processed sounds of the space around them directly at that one person, creating a bubble of beautiful sonic impossibility to be enjoyed by one person at a time.

My hack utilised a MOD Duo, Novation Circuit & Numark ORBIT

My project at the Waves Music Hackday utilised a MOD Duo, which was processing the sound of my microphone as well as the audio output from a Novation Circuit sequencer & synthesizer. The MIDI output of one of the channels on the Circuit was also controlling a synthesizer, vocoder and autotune pedals within the Duo. The Numark ORBIT is a wireless MIDI controller with multiple banks of buttons, dials, and perhaps most importantly a 2-axis gyroscope. I strapped the controller to my wrist and used the large rotary encoder and the gyro output to control the parameters of my pedalboard.

My performance at the end of the hackathon

Naturally, I then donned a pair of EEG-controlled animatronic wiggling ears, fired up some audio-reactive visuals created in Max MSP and performed one of my songs. Now, I’m a big fan of vocal effects & processing (especially vocoding & autotune) and having the freedom to use those types of effects without processing the microphone via a combo of audio interface & laptop like I usually would at my gigs was really refreshing. Both the Novation Circuit & Numark ORBIT are class-compliant MIDI devices, which meant I was able to connect them both via a USB hub and utilise their output within my MOD pedalboard with ease.

The winners of our ‘Gesture to Sound’ challenge

The winners of our ‘Gesture to Sound’ challenge were Richard Vogl & Daniel Hütter for their project Metal Stance, which enables guitarists to control elements of their MOD pedalboard by changing their pose or stance. Their demonstration performance was incredibly engaging and the audience clearly understood the relationship between the musician’s body movements and the sounds being produced. Richard & Daniel won a MOD Duo for their amazing work on this great project!

I’m more used to a 24-hour hackathon format, such as those at Music Tech Fest and most other Music Hackday events, but this one-day event really blew me away. It didn’t feel like an 8-hour hackathon, it felt like the final 8 hours of a longer hackathon because all of the hackers just knuckled down and started creating amazing things right from the beginning. It was amazing to see all of the fantastic stuff that people came up with - whether they used a MOD Duo in their projects or not. Thanks to everyone who attended, and for anyone who wants a chance to hack a MOD Duo, our next stop is Music Hackday Berlin - I hope we’ll see you there, and in the meantime keep making music, keep loving life, & keep enjoying your MOD Duo!

  • Adam @ MOD HQ

October 14, 2016 05:20 AM

October 12, 2016

Libre Music Production - Articles, Tutorials and News

Linux Show Player 0.4.1 released

A new version of Linux Show Player is now available.

What's new:

  •  Add: Translation support (currently: English, Italian, Spanish, Slovenian) [! Help wanted !]
  •  Update: UI improvements in settings dialogs
  •  Minor improvements & fixes

Linux Show Player (or LiSP for short) is a free cue player designed for sound playback in stage productions.
The goal of the project is to provide complete playback software for musical plays, theater shows and similar.

by yassinphilip at October 12, 2016 09:12 PM

October 11, 2016

Libre Music Production - Articles, Tutorials and News

Guitarix 0.35.2 released

Guitarix 0.35.2 has just been released. The changelog for this release is as follows -

by Conor at October 11, 2016 11:35 AM

New open source project, Flo's Audio Plugins, bring flexible cabinet simulation

There is a new suite of open source (GPLv3) plugins on the block, Flo's Audio Plugins. The suite currently consists of 3 miked cabinet simulation plugins, based on various freely available impulse response collections.

by Conor at October 11, 2016 10:56 AM

MOD Devices Blog

Pre-order shipping update

Good news, music maestros! If you’re one of the people who have already placed a pre-order for a MOD Duo, your wisdom & foresight will soon be rewarded - the shipment of pre-ordered units has now begun! Who else is excited? I certainly am. The first units from this batch will be off to begin their new lives in your studios, stages & gig-bags this Friday, and the remaining units for all current pre-orders will be on their way next week.

Assembly & testing has been going on at our Berlin headquarters with a level of efficiency & attention-to-detail that perhaps only the skilled team of a German electronic engineering company like Schleicher could provide.

Jess assembling MOD Duos at Schleicher

Jess from Schleicher is seen here assembling some Duos, and has been described by MOD boss-man Gianfranco (a.k.a. “The MODfather”) as an electronic artisan.

falkTX deploying your new MOD Duos

Some Duos being set up & tested through the deploy machine by our talented colleague Filipe Coelho, a.k.a. falkTX and known throughout the Linux music community for his tireless work as the creator of the KXStudio distribution and many amazing Linux audio applications. Your new MOD Duo has come from the hands of master craftsmen!

To all of our backers from Kickstarter, plugin developers, pedalboard sharers, and all the lucky people out there that have already been enjoying a MOD Duo, thank you so much for being part of the wonderful community that has been creating and sharing amazing content - a community to which we’re about to welcome a whole bunch of new members. All the lucky Duo newbies will benefit from access to the pedalboards already shared by other musicians, and with the Duo in so many sets of talented new hands, we’ll all discover & share even more amazing ways to get the sounds from inside our minds out into the real world.

Keep making music, keep loving life, & keep enjoying your MOD Duo (or get ready to start enjoying your new MOD Duo) - Adam @ MOD HQ

October 11, 2016 05:20 AM