planet.linuxaudio.org

September 30, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.10.0 release candidate 1 (1.9.90)

The GStreamer team is pleased to announce the first release candidate of the stable 1.10 release series. The 1.10 series adds new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

September 30, 2016 10:00 AM

September 29, 2016

Libre Music Production - Articles, Tutorials and News

John Option release new song, "Lifestyle obsession"

John Option have just released a new song called "Lifestyle obsession", accompanied as always by a video. Like all John Option releases, the new song is published under the terms of the Creative Commons Attribution-ShareAlike license.

by Conor at September 29, 2016 11:10 AM

AMSynth sees new release

AMSynth version 1.7.0 has just been released. AMSynth is an open source realtime software synthesizer for Linux. Its operation is similar to that of the Moog Minimoog and Roland Juno-60, classic analog synthesizers of the 1970s and early '80s. It is available in various formats, including LV2 and DSSI plugins as well as standalone JACK/ALSA clients.

Changes in this release include -

by Conor at September 29, 2016 11:02 AM

September 28, 2016

ardour

Ardour Solar Powered Development

Some Ardour users and other interested readers may be intrigued to know that almost all of Ardour's lead developer's work on the program right now is powered by photovoltaic panels. 540W of panel capacity and 510Ah of lead-acid AGM batteries generally provide enough power to keep a 4-core i7-3370US 3.10GHz system running, with occasional use of a Mac Mini. The same setup also runs the fridge, lighting, music and pump systems in the Sprinter van that he and his wife are living in until June 2017. With the exception of the Mac Mini, all other computing equipment is 12V DC native (no power bricks), including the 40W Topping amplifier used to power a pair of Micca monitors.

It is unlikely that the solar power available in the UK during the winter (where the van will be located) will be enough to keep things running, so at some point reverting to a cable and mains power seems likely. But for now, all of Paul's work is powered by the sun. Welcome to the future!

(P.S. As a footnote, the other systems intimately involved in Ardour development, while connected to the grid, are also powered via contracts that guarantee 100% renewable energy sources.)

read more

by paul at September 28, 2016 10:16 PM

September 25, 2016

digital audio hacks – Hackaday

Sending Music Long Distance Using A Laser

This isn’t the first time we’ve seen DIYers sending music over a laser beam, but the brothers [Armand] and [Victor] are certainly in contention for sending the music the longest distance: 452 meters (1480 feet) from their building, over the tops of a few houses, through a treetop and into a friend’s apartment. The received sound quality is pretty amazing too.

In case you’ve never encountered this before: the light of the laser is modulated with a signal taken directly from the audio source, making it an analog transmission. The laser is a 250mW diode laser bought from eBay, powered through a 5V 7805 voltage regulator fed by a 12V battery. The signal from the sound source enters the circuit through a step-up transformer, which isolates it so that no DC from the source enters. The laser’s side of the transformer feeds the base of a transistor. A switch selects between two modes: the current from the regulator can flow through the collector and emitter of the transistor controlled by the sound source, giving strong modulation, or it can go directly to the laser, with modulation provided through just the transistor’s base and emitter. The schematic for the circuit is given at the end of their video, which you can see after the break.
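
The intensity modulation described above can be sketched numerically. This is a hypothetical illustration, not the brothers’ circuit: the audio samples are scaled by a modulation depth and added to a DC bias so the laser never fully switches off, and the receiver recovers the audio by subtracting that bias.

```python
import math

def modulate(audio, bias=0.6, depth=0.4):
    """Amplitude-modulate a laser drive level with an audio signal.

    audio: samples in [-1.0, 1.0]
    bias:  DC operating point so the laser stays lit
    depth: how strongly the audio swings the intensity
    Returns drive levels clamped to [0.0, 1.0].
    """
    return [min(1.0, max(0.0, bias + depth * s)) for s in audio]

# A 440 Hz test tone at an 8 kHz sample rate
tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
drive = modulate(tone)

# The receiver (here, a solar cell) sees intensity; subtracting the
# bias recovers the audio, which is why the link is purely analog.
recovered = [(d - 0.6) / 0.4 for d in drive]
```

With these bias and depth values the drive never clips, so the recovered signal matches the original tone exactly (up to floating-point error).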

They receive the beam in their friend’s apartment using solar cells, which feed a fairly big amplifier and speakers. In the video you can hear the surprisingly high-quality sound that results. So check it out. It also includes a little Benny Hill humor.

And when have we seen laser communication before? Why, yours truly demonstrated a shorter-range transmission using a dollar-store pet toy laser sending to a solar cell and a homemade amplifier. If you want to dig deeper, Gigabit laser Ethernet is the one for you.


Filed under: digital audio hacks, laser hacks

by Steven Dufresne at September 25, 2016 05:01 PM

September 24, 2016

digital audio hacks – Hackaday

Now is the Golden Age of Artisanal, Non-Traditional Tube Amps

Earlier in the month, [Elliot Williams] quipped that it had been far too long since we saw a VFD-based amplifier build. Well, that dry spell is over. This week, [kodera2t] started showing off his design for a VFD headphone amp.

Here’s the thing: this isn’t using old surplus vacuum fluorescent displays. This is actually a new part. We first covered it about 18 months ago when Korg and Noritake announced the NuTube. It’s the VFD form factor you would find in old stereo and lab equipment, but housed in the familiar glass case is a triode designed specifically for audio amplification.

Check out [kodera2t’s] video below where he walks through the schematic for his amplifier. Since making that video he has populated the boards and taken it for a spin — no video of that yet but we’re going to keep a watchful eye for a follow-up. Since these parts can be reliably sourced he’s even planning to sell it in his Tindie store. If you want to play around with this new tube that’s a pretty easy way to get the tube and support hardware all in one shot. This is not a hack, it’s being used for exactly what Korg and Noritake designed it to do, but we hope to see a few of these kits hacked for specific tastes in amp design. If you do that (or any other VFD hacking) we want to hear about it!

And now for the litany of non-traditional VFD amps we’ve grown to love. There is the Nixie amp where [Elliot] made the quip I mentioned above, here’s an old radio VFD amp project, in this one a VCR was the donor, and this from wayback that gives a great background on how this all works.


Filed under: classic hacks, digital audio hacks, slider

by Mike Szczys at September 24, 2016 11:01 AM

September 21, 2016

rncbc.org

Qtractor 0.7.9 - A Snobbier Graviton release


So it's last equinox'16...

And the ultimate last of the Qstuff* End of Summer'16 release parties.

Qtractor 0.7.9 (snobbier graviton) is now released!

Release highlights:

  • Audio/MIDI metronome anticipatory offset (NEW)
  • Current clip highlighting (NEW)
  • SFZ sample file archive/zip bundling (NEW)
  • MIDI transpose Reverse tool (NEW)
  • MIDI (N)RPN running status and NULL support (NEW)
  • MIDI Controllers catch-up algorithm (FIX)
  • MIDI track Instrument menu (FIX)
  • JACK shutdown and buffer-size changes (FIX)

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the JACK Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures, with Qtractor evolving as a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And the boring complete change-log follows:

  • JACK buffer-size change handling has been deeply improved, now doing an immediate session restart, while preserving all external connections as much as possible.
  • Introduced an audio and MIDI metronome anticipatory offset, a kind of latency compensation, in the respective option settings (cf. View/Options.../Audio, MIDI/Metronome/Offset (latency)).
  • Fixed LADSPA plug-in preset switching, incidentally broken as a NOP ever since the late Haziest Photon crash-landed.
  • MIDI Track/Instrument cascading menus were found empty/broken on Qt5 builds; now fixed.
  • MIDI RPN/NRPN running status and RPN NULL reset command are now supported (input only).
  • Fixed an immediate crash on removing audio buses that are the current targets of any active aux-send inserts.
  • Fixed yet another old bummer that was wiping assigned MIDI controllers off an existing track's gain/volume or panning controls whenever any single new track was added.
  • Fixed missing feedback on MIDI controllers assigned to any of monitor, record, mute and solo track/bus state buttons.
  • Eye-candy warning: the current clip, not necessarily the one currently selected, is now highlighted with a solid outline; linked MIDI clips are also highlighted with an alternate dashed outline.
  • SFZ file conversion, and bundling of the respective sample files, is now supported when saving as zip/archive (*.qtz).
  • Fixed track monitor, record, mute and solo dangling states, on Track/Duplicate command.
  • Slight regression on the LV2 State Files abstract/relative file-path mapping, trading QFileInfo::canonicalFilePath() for QFileInfo::absoluteFilePath(), and thus skipping all symlink dereferences in the process.
  • Fixed a first-time linking/ref-counting glitch affecting recently recorded MIDI clips, which might have their initial clip length still un-quantized to MIDI resolution (BBT).
  • A brand new and discrete MIDI clip editor command tool has been added: MIDI Tools/Transpose/Reverse.
  • Discretely fixed MIDI Controllers catch-up algorithm.
  • Fixed a borderline mistake on plug-in parameter port index mapping to its corresponding symbolic name, especially if newer plug-in versions are loaded on older saved sessions.
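
The anticipatory metronome offset in the change-log above is essentially latency compensation: the click is queued slightly ahead of the beat so it is heard on the beat. A toy sketch of the idea (hypothetical code, not Qtractor's implementation):

```python
def click_schedule(beat_frames, offset_ms, sample_rate=48000):
    """Shift metronome click start times earlier by a fixed latency offset.

    beat_frames: frame positions of the beats on the timeline
    offset_ms:   anticipatory offset (output latency) in milliseconds
    Returns the frame positions at which clicks should be queued.
    """
    offset_frames = int(offset_ms * sample_rate / 1000)
    return [max(0, b - offset_frames) for b in beat_frames]

# Beats every half second at 48 kHz, compensated for 10 ms of latency
beats = [0, 24000, 48000, 72000]
print(click_schedule(beats, 10))  # [0, 23520, 47520, 71520]
```

At 48 kHz, 10 ms is 480 frames, so each click is pulled 480 frames earlier (clamped so it never lands before the session start).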

 

Enjoy && Have (lots of) fun.

by rncbc at September 21, 2016 05:00 PM

Libre Music Production - Articles, Tutorials and News

Qtractor 0.7.9 - A Snobbier Graviton release

Rui Nuno Capela continues his end of summer release frenzy. This time around he is pushing out Qtractor 0.7.9.

Release highlights include -

by Conor at September 21, 2016 03:50 PM

September 20, 2016

drobilla.net

Sratom 0.6.0

sratom 0.6.0 has been released. Sratom is a library for serialising LV2 atoms to/from RDF, particularly the Turtle syntax. For more information, see http://drobilla.net/software/sratom.

Changes:

  • Add sratom_set_env() for setting prefixes
  • Fix padding of constructed vectors (thanks Hanspeter Portner)
  • Support round-trip serialisation of relative paths
  • Support sequences with beat time stamps
  • Fix warnings when building with ISO C++ compilers
  • Upgrade to waf 1.8.14

by drobilla at September 20, 2016 02:25 AM

Lilv 0.24.0

lilv 0.24.0 has been released. Lilv is a C library to make the use of LV2 plugins as simple as possible for applications. For more information, see http://drobilla.net/software/lilv.

Changes:

  • Add new hand-crafted Pythonic bindings with full test coverage
  • Add lv2apply utility for applying plugins to audio files
  • Add lilv_world_get_symbol()
  • Add lilv_state_set_metadata() for adding state banks/comments/etc (based on patch from Hanspeter Portner)
  • Fix crash when state contains non-POD properties
  • Fix crash when NULL predicate is passed to lilv_world_find_nodes()
  • Fix state file versioning
  • Unload contained resources when bundle is unloaded
  • Do not instantiate plugin when data fails to parse
  • Support re-loading plugins
  • Replace bundles if bundle with newer plugin version is loaded (based on patch from Robin Gareus)
  • Fix loading dyn-manifest from bundles with spaces in their path
  • Check lv2:binary predicate for UIs
  • Add LILV_URI_ATOM_PORT and LILV_URI_CV_PORT defines
  • Fix documentation installation
  • Fix outdated comment references to lilv_uri_to_path()

by drobilla at September 20, 2016 02:24 AM

September 19, 2016

rncbc.org

Vee One Suite 0.7.6 - The Eleventh beta release


Hello again!

The Vee One Suite, a.k.a. the gang of three old-school homebrew software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), is here released in its eleventh beta iteration, joining the so-called Qstuff* End of Summer'16 release frenzy.

All still available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Manager) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The common change-log says:

  • MIDI RPN/NRPN running status and RPN NULL reset command are now supported (input only).
  • The core engine implementation is now delivered as a shared object library, common to both the JACK stand-alone client and the LV2 instrument plug-in.
  • Discretely fixed MIDI Controllers catch-up algorithm.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And here they come again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.6 (eleventh official beta) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

samplv1 - an old-school polyphonic sampler

samplv1 0.7.6 (eleventh official beta) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.6 (eleventh official beta) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && have (lots of) fun ;)

by rncbc at September 19, 2016 05:00 PM

Libre Music Production - Articles, Tutorials and News

Vee One Suite 0.7.6 released

Days after his Qstuff* end of Summer'16 release frenzy, Rui Nuno Capela releases the Eleventh beta release of his Vee One Suite, version 0.7.6. This suite of plugins includes -

by Conor at September 19, 2016 04:35 PM

open-source – CDM Create Digital Music

MeeBlip triode synth gets even bigger bass

Our MeeBlip synth is back. It’s still a tiny box you can add to a synth setup. It’s still just US$139.95. But now, it packs some improved features – and bigger-than-ever bass.

The most important thing I can tell you about this is, when you flip the “sub” switch on and enable its new third oscillator, its bass sound is simply enormous.

And that makes me really glad to share it with you, the latest fruits of CDM’s collaboration with engineer James Grahame — the brains behind MeeBlip.

James has selected some sounds I made with it. A few seconds into that first sound, I power up that sub oscillator. You’ll need something other than laptop speakers to hear it.

We sold out of the Triode’s award-winning predecessor, the MeeBlip anode. So it’s been impossible to get a MeeBlip for a few months unless you were buying second-hand.

But if you missed out, you’ve got a second chance with Triode. And there are some improvements – apart from just the red color.

  • NEW sub oscillator
  • NEW red color
  • NEW 8 additional custom wavetables, for 24 in total
  • Tuned envelopes for more response
  • Front-panel glide
  • MIDI control of analog filter resonance

All of this digital grunge is combined with the same Twin-T analog filter from the anode. It’s a vintage filter design intended for things like guitar pedals, which adds aggressive resonance to your synth sound.

And you can now add Triode alongside other stuff you might find useful as a mobile musician – like our new BlipCase (which is designed to fit instruments like the Korg volca series), and an excellent driver-free USB MIDI interface.

When we started developing the MeeBlip project, there really weren’t compact MIDI synths you could get for this price. But every time I switch the MeeBlip on in my studio, I’m reminded of why I believe in this project. Apart from the fact that the MeeBlip remains open source hardware – every circuit, every line of code – it’s still an instrument with a personality all its own. There’s nothing dirty in quite the same way. And when you need a box to add something grimy and heavy on top of all the other wonderful toys we’ve got, it’s there for you.

In stock.

Shipping now, worldwide, direct from us – hand-tested by the engineer at his studio in Calgary, Canada.

http://meeblip.com


The post MeeBlip triode synth gets even bigger bass appeared first on CDM Create Digital Music.

by Peter Kirn at September 19, 2016 02:37 PM

September 17, 2016

Pid Eins

systemd.conf 2016 Workshop Tickets Available

Tickets for the systemd.conf 2016 workshop day are still available!

We still have a number of tickets for the workshop day of systemd.conf 2016 available. If you are a newcomer to systemd and would like to learn about various systemd facilities, or if you already know your way around but would like to know more, this is the best chance to do so. The workshop day is the 28th of September, one day before the main conference, at the betahaus in Berlin, Germany. The schedule for the day is available here. There are five interesting, extensive sessions, run by the systemd hackers themselves. Who better to learn systemd from than the folks who wrote it?

Note that the workshop day and the main conference days require different tickets. (Also note: there are still a few tickets available for the main conference!).

Buy a ticket here.

See you in Berlin!

by Lennart Poettering at September 17, 2016 10:00 PM

September 15, 2016

GStreamer News

GStreamer Conference 2016: Last chance for early-bird discount on tickets

This is a quick reminder that registration for the GStreamer Conference 2016 is open, and if you register today you can still benefit from the discounted early-bird registration fee, which is only available until Thursday 15 September 2016 (inclusive). After that, the registration fee for professional tickets will rise to 340 EUR.

Register now for the GStreamer Conference!

GStreamer Conference 2016 Berlin

About the GStreamer Conference

The GStreamer Conference 2016 will take place on 10-11 October 2016 in Berlin, Germany, in the same week as the Embedded Linux Conference Europe. More information and details on how to register can be found on the conference website.

September 15, 2016 10:00 AM

September 14, 2016

rncbc.org

The QStuff* End of Summer'16 Release

Howdy!

Modesty aside, this is the ultimate Qstuff* End of Summer'16 release frenzy.

Nothing less than the following gems:

  • QjackCtl 0.4.3
  • Qsynth 0.4.2
  • Qsampler 0.4.1
  • QXGEdit 0.4.1
  • QmidiCtl 0.4.1
  • QmidiNet 0.4.1

are now released to the masses.

Enjoy and have (lots of) fun!

 

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.3 (end of summer'16) released!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:
http://qjackctl.sourceforge.net
Project page:
http://sourceforge.net/projects/qjackctl
Downloads:
http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl.git
https://gitlab.com/rncbc/qjackctl.git
https://bitbucket.org/rncbc/qjackctl.git

Change-log:

  • Fix build error caused by variable length array.
  • Fix some tooltip spelling (patch by Jaromír Mikeš, thanks).
  • Translation (not) fix for the default server name "(default)".
  • Old "Start minimized to system tray" option returns to setup.
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).
  • Late again French (fr) translation update (by Olivier Humbert aka. trebmuh, thanks).

 

Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.2 (end of summer'16) released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsynth.sourceforge.net
Project page:
http://sourceforge.net/projects/qsynth
Downloads:
http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth.git
https://gitlab.com/rncbc/qsynth.git
https://bitbucket.org/rncbc/qsynth.git

Change-log:

  • Old "Start minimized to system tray" option returns to setup.
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).

 

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.1 (end of summer'16) released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsampler.sourceforge.net
Project page:
http://sourceforge.net/projects/qsampler
Downloads:
http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code
https://github.com/rncbc/qsampler.git
https://gitlab.com/rncbc/qsampler.git
https://bitbucket.org/rncbc/qsampler.git

Change-log:

  • Fixed a race condition on creating sampler channels that ended in duplicate channel strips; also fixed channel auto-arrange.
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).
  • Automake: set environment variable GCC_COLORS=auto to allow GCC to auto detect whether it (sh/c)ould output its messages in color.

 

QXGEdit - A Qt XG Editor

QXGEdit 0.4.1 (end of summer'16) released!

QXGEdit is a live XG instrument editor, specialized in editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG, and thus probably a baseline for many other XG devices.

Website:
http://qxgedit.sourceforge.net
Project page:
http://sourceforge.net/projects/qxgedit
Downloads:
http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit.git
https://gitlab.com/rncbc/qxgedit.git
https://bitbucket.org/rncbc/qxgedit.git

Change-log:

  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).

 

QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.1 (end of summer'16) released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. It is inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and has also been promoted to the Maemo package repositories. Nevertheless, QmidiCtl may still be found just as effective as a regular desktop application.

Website:
http://qmidictl.sourceforge.net
Project page:
http://sourceforge.net/projects/qmidictl
Downloads:
http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl.git
https://gitlab.com/rncbc/qmidictl.git
https://bitbucket.org/rncbc/qmidictl.git

Change-log:

  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).

 

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.1 (end of summer'16) released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.
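
A minimal sketch of the ipMIDI-style transport these gateways speak: raw MIDI bytes wrapped in UDP datagrams sent to a multicast group (the group and port below are the conventional ipMIDI defaults for "port 1"). This is purely illustrative, not QmidiNet's actual code.

```python
import socket

IPMIDI_GROUP = "225.0.0.37"   # conventional ipMIDI multicast group
IPMIDI_PORT = 21928           # conventional port for ipMIDI "port 1"

def note_on(channel, note, velocity):
    """Build a raw 3-byte MIDI note-on message."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def send_midi(data):
    """Fire a MIDI message at the multicast group, ipMIDI style."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep the datagram on the local network segment
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    try:
        sock.sendto(data, (IPMIDI_GROUP, IPMIDI_PORT))
    except OSError:
        pass  # no multicast route available (e.g. an offline machine)
    finally:
        sock.close()

msg = note_on(0, 60, 100)  # middle C on channel 1
send_midi(msg)
```

Any listener that joins the same multicast group (as ipMIDI, multimidicast, QmidiNet and QmidiCtl do) receives the bytes and feeds them to its local MIDI subsystem.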

Website:
http://qmidinet.sourceforge.net
Project page:
http://sourceforge.net/projects/qmidinet
Downloads:
http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet.git
https://gitlab.com/rncbc/qmidinet.git
https://bitbucket.org/rncbc/qmidinet.git

Change-log:

  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).

 

License:

All of the Qstuff* are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

 

Enjoy && keep the fun, always!

by rncbc at September 14, 2016 07:00 PM

Libre Music Production - Articles, Tutorials and News

QStuff* End of Summer'16 Release

Fans of Rui Nuno Capela's Qstuff*, take heed: he has just released the following end-of-summer updates to his suite of software -

  • QjackCtl 0.4.3
  • Qsynth 0.4.2
  • Qsampler 0.4.1
  • QXGEdit 0.4.1
  • QmidiCtl 0.4.1
  • QmidiNet 0.4.1

For a full rundown of the changelogs, check out Rui's announcement over at rncbc.org.

by Conor at September 14, 2016 03:46 PM

GStreamer News

GStreamer Conference 2016: Collabora Platinum Sponsor

The GStreamer project is pleased to welcome back Collabora as Platinum level sponsor at this year's GStreamer Conference in Berlin.

Collabora (https://www.collabora.com) is a consultancy with more than 10 years of experience in open source technologies. As well as employing several core contributors, they have sponsored the GStreamer Conference every year since the very first one.

Thanks Collabora!

Collabora

About the GStreamer Conference

The GStreamer Conference 2016 will take place on 10-11 October 2016 in Berlin, Germany, in the same week as the Embedded Linux Conference Europe. More information and details on how to register can be found on the conference website.

September 14, 2016 01:00 PM

Linux – CDM Create Digital Music

Jamming standard: Ableton is opening Link to everyone, starting today

Ableton Link is coming to desktops, and going completely open source. And that means the best tool for wireless sync and jamming is about to get a lot more popular.

On iOS and for Ableton Live users, Ableton Link is already a revelation. It allows any number of different apps to sync up with one another without fuss. That includes two more machines running Ableton Live, of course. But it could also be two apps on an iPad, or an iPhone and an iPad, or an iPad and a copy of Ableton Live. It completely changes live jamming: instead of needing tech and setup, you only need friends.

And this is what was unique about Ableton Link. Almost from day one, it was something that embraced developers outside Ableton’s own offices.

Well, that’s about to accelerate – a lot. Ableton Link goes from being a tool for Ableton Live that happens to have an iOS mobile SDK to a lot more. You can actually look at this as several things happening at once.

Ableton Link is desktop-ready. There’s now a full desktop SDK available on GitHub, complete with example apps for Windows, macOS, and Linux.

Ableton Link is open source, free software. All the source code for Ableton Link is available on GitHub. (It’s written in C++.) It’s also liberally licensed, under a GPLv2 license – free as in freedom. And if you do want to build proprietary software, there’s a licensing option. (There’s more to discuss here for those of us in the free software community as far as license compatibility, but I’m also less worried about that precisely because I feel the team at Ableton are flexible enough to have a discussion if the legal license itself doesn’t answer a question.)
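
Link’s sync model is easy to state: peers share a tempo, map a common clock onto a beat timeline, and derive their bar phase from a “quantum” (beats per bar). Here is a toy illustration of that beat/phase arithmetic, assuming nothing about the real SDK’s API:

```python
def beats_at(time_s, start_s, bpm):
    """Beats elapsed on the shared timeline since the session start."""
    return (time_s - start_s) * bpm / 60.0

def phase_at(time_s, start_s, bpm, quantum=4.0):
    """Position within the current bar, in [0, quantum)."""
    return beats_at(time_s, start_s, bpm) % quantum

# Two peers reading the same clock agree on beat and phase,
# so both can launch loops on the next downbeat.
now = 10.0
print(beats_at(now, 0.0, 120.0))  # 20.0 beats
print(phase_at(now, 0.0, 120.0))  # 0.0 (a downbeat)
```

Because every peer computes the same beat and phase from the shared clock and tempo, quantized launches line up without any master/slave negotiation.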

Meet “other platforms.”

There are desktop partners – Propellerhead, Cycling ’74, and Serato. Um, wow. Not only are these the developers of three flagship apps, but they each represent essential music making communities (the Reason, Max, and Serato DJ communities being some of the most passionate anywhere). And they mean the launch partnership covers three categories of tools (a music studio, a DIY music toolkit, and a DJ app).

And each has been involved in various kinds of innovation. Propellerhead have played a key role in the evolution of the ideas we have today about software as instruments, as well as how software could interoperate (with ReWire). Max/MSP has been an environment where new ideas in music software often emerge, and was even the playground used by the founders of Ableton before they founded Ableton. And Serato is notable because they helped contribute to how sync works in Live today. (The planned integration for The Bridge having failed is itself significant; I think these days, we’d be happy just to have simple sync and not worry about something so over-ambitious.)

Obviously, more will follow. I’m disappointed not to see Native Instruments here, for instance, as I think being involved is important to NI’s stated mission of pushing standards.

Serato joins Ableton. All photos courtesy Ableton.

Serato joins Ableton. All photos courtesy Ableton.

The iOS SDK has also been updated, and will continue to grow. There’s a 2.0 SDK, improved example apps, and of course Link is becoming a standard in iOS tools that use sync.

More platforms can follow. Now, here’s where things get interesting. Linux support means all kinds of unique platforms, like the Raspberry Pi. (The Link team has already tested a RasPi; I will, too, for sure.) That opens up sync-able hardware. And while there’s no official Android SDK or example apps, I’m certain we’ll see some intrepid Android developers make their own in a hurry – there’s already everything they need in the SDK.

Just making something open source doesn’t magically make stuff happen. (Trust me on this. Apart from using open source tools every single day, I’ve been involved in the management of both open source hardware and software.) So this isn’t a “build this and they will come” sort of deal. And that’s why I’m excited by the team at Ableton working on this. Not only did they create the best technology in the business for sync and jamming, but I trust them to manage this as an open source tool. Florian Goltz, with whom CDM spoke on background for this article, is now Link Open Source Project Owner, and Michaela Buergle remains Link Product Owner. (Michaela was I think one of the most eloquent speakers at Loop, which is important – making technology successful is not just an engineering problem, but a communication problem, as well.)


Now, having heaped that praise on Ableton, I think the next step is up to us. We have to build interesting apps with this tech, and find ways of playing with tools and with each other to make better music. I also hope those of us advocating open source software and education (cough, uh, like me) can find ways of helping people realize their own ideas for new tools with this platform.

For users:
https://www.ableton.com/en/link/

For developers:
https://ableton.github.io/link/

Find software:
https://www.ableton.com/en/link/apps/

The post Jamming standard: Ableton is opening Link to everyone, starting today appeared first on CDM Create Digital Music.

by Peter Kirn at September 14, 2016 11:18 AM

Libre Music Production - Articles, Tutorials and News

Ardour web presence: developer wanted

The Ardour project is looking for help with development of the project's websites. Ardour currently uses five independent websites: the main site, forums, manual, bug tracker and nightly builds. They are looking for someone to help them with the following tasks -

by Conor at September 14, 2016 06:19 AM

ardour

Ardour web presence: developer wanted

Ardour is an open source application for recording, editing and mixing music & sound for Linux, OS X and Windows. The project started over 16 years ago, and has seen contributions from more than 70 programmers. It recently reached its 5.0 release milestone. It generates sufficient revenue to pay its lead developer a reasonable salary, as well as covering all costs associated with its network presence. We're looking for someone to do a little bit of website administration and development for us. Read on below for more details...

read more

by paul at September 14, 2016 01:26 AM

September 12, 2016

digital audio hacks – Hackaday

“Nixie” Tubes Sound Good

A tube is a tube is a tube. If one side emits electrons, another collects them, and a further terminal can block them, you just know that someone’s going to use it as an amplifier. And so when [Asa] had a bunch of odd Russian Numitron tubes on hand, an amplifier was pretty much a foregone conclusion.

A Numitron is a “low-voltage Nixie”, or more correctly a single-digit VFD in a Nixiesque form factor. So you could quibble that there’s nothing new here. But if you dig into the PDF writeup, you’ll find that the tubes have been very nicely characterised, situating this project halfway between dirty hack and quality lab work.

It’s been a while since we’ve run a VFD-based amplifier project, but it’s by no means the first time. Indeed, we seem to run one every couple years. For instance, here is a writeup from 2010, and the next in 2013. Extrapolating forward, you’re going to have to wait until 2019 before you see this topic again.


Filed under: digital audio hacks, misc hacks

by Elliot Williams at September 12, 2016 02:00 AM

September 05, 2016

GStreamer News

GStreamer Conference 2016: Registration now open

About the GStreamer Conference

The GStreamer Conference 2016 will take place on 10-11 October 2016 in Berlin, Germany, in the same week as the Embedded Linux Conference Europe.

It is a conference for developers, decision-makers, and anyone else interested in the GStreamer multimedia framework and open source multimedia.

Registration now open

You can now register for the GStreamer Conference 2016 via the conference website.

September 05, 2016 09:00 AM

September 04, 2016

GStreamer News

Orc 0.4.26 bug-fix release

The GStreamer team announces another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • Use 64 bit arithmetic to increment the stride if needed (fixing crashes in certain libgstvideo functions on OS X)
  • Fix generation of ModR/M / SIB bytes for the EBP, R12, R13 registers on X86/X86-64 (fixing crashes in compositor on Windows)
  • Fix test_parse unit test if no executable backend is available
  • Add orc-test path to the -uninstalled .pc file
  • Fix compiler warnings in the tests on OS X

Direct tarball download: orc-0.4.26.

September 04, 2016 10:00 AM

September 01, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.9.2 unstable release

The GStreamer team is pleased to announce the second release of the unstable 1.9 release series, which marks the feature freeze for 1.10. The 1.9 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.9 release series will lead to the stable 1.10 release series in the coming weeks. Any newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

September 01, 2016 09:00 AM

August 29, 2016

ardour

Ardour 5.3 released

Ardour 5.3 is almost entirely a bug-fix release, correcting a number of issues discovered since 5.1.

There was no 5.2 release, due to a mistake during the release process.

If you're looking for information about Ardour 5.0, you'll want to read the release notes.

Download  

Read the full details below ...

read more

by paul at August 29, 2016 01:30 AM

August 26, 2016

open-source – CDM Create Digital Music

BlokDust is an amazing graphical sound tool in your browser

Just when you think you’ve tired of browser toys, of novel graphical modular sound thing-a-ma-jigs, then — this comes along. It’s called Blokdust. It’s beautiful. And … it’s surprisingly deep. Not only might you get sucked into playing with it, but thanks to some simple but powerful blocks and custom sample loading, you might even make a track with it. And for nerds, this is all fully free and open source and hipster-JavaScript-coder compliant if you want to toy with the stuff under the hood.

Here’s a teaser to give you a taste:

The tasteful, geometric interface recalls trendy indie games, a playful flat world to explore. The actual geometric representations themselves are a bit obtuse – it seems there was perhaps a missed opportunity to say something functional with the shapes and colors – but it’s easy enough to figure out anyway, and makes for a nice aesthetic experience. (And, indeed, some three decades into visual patcher software, why not play around with making them attractive?)

And then there are the modules. These are indeed simple enough for a first-time musician to play around with, but they sound good enough – and have enough necessary features and novelties – that the rest of you will like them, too.

Crucially, it’s not just some basic synths or pre-built samples. There’s a powerful granular and wavetable sound source, for instance. You can load your own samples into the granular source, and the generative wavetables alone are worth giving this a go. (They’re really delightful. Try not to smile while messing about.)
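For a sense of what a granular source does under the hood, here’s a toy sketch (in Python purely for readability, and purely illustrative – it says nothing about how BlokDust itself is implemented): slice a source into short windowed grains, then overlap-add them back at a fixed hop.

```python
import math

def granulate(samples, grain_size=64, hop=32, pitch=1.0):
    """Toy granular resynthesis: slice the source into short grains,
    apply a Hann window to each, and overlap-add them at a fixed hop.
    pitch > 1.0 reads faster through each grain (pitched up)."""
    out = [0.0] * (len(samples) + grain_size)
    pos = 0.0
    write = 0
    while int(pos) + grain_size <= len(samples):
        for i in range(grain_size):
            # Hann window smooths grain edges to avoid clicks
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_size - 1))
            src = int(pos + i * pitch)
            if src < len(samples):
                out[write + i] += samples[src] * w
        pos += hop
        write += hop
    return out[:len(samples)]
```

With the hop at half the grain size, overlapping Hann windows sum to roughly unity gain, so a steady input comes back out at about the same level; raising `pitch` shifts the perceived pitch without changing the overall duration.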

And you can use a microphone. And there are a dozen clever effects, including a convolution reverb (with Teufelsberg impulse, no less).

You can play with MIDI (thanks, Chrome) or a computer keyboard, but there is also a section of automatic triggers the developers call “power.” These include particle emitters and the like, and they may in fact be the best opportunity for open source development, because they could take this all in some new directions.

In fact, really the only disappointment here is that there’s not a whole lot of advantage to running in a browser, apart from this being free. Sure, there’s a share feature, but this is nothing that couldn’t be in a standalone app – and you lose out on touch interactions since it’s built for desktop Chrome, unless you have capable hardware.

As a design experiment, though, it’s brilliant. And you could still use a third-party audio recorder to capture sound, thus making this a real sketchpad.

I’m very interested to see where this might go. It’s perhaps the most compelling use of browser audio yet, through sheer force of the intelligence of the interaction design, looks, and sound.

The project is developed by Luke Twyman (Whitevinyl), Luke Phillips (Femur Design) and Ed Silverton. It’s made in the UK – Brighton to be exact – with Tone.js and of course the Web Audio API. And yes, it works best in Chrome. (Come on, Apple and Microsoft.)

Try it yourself:

https://blokdust.com/

That library (good stuff):

https://github.com/Tonejs/Tone.js/

Genius work – congrats, lads.

The post BlokDust is an amazing graphical sound tool in your browser appeared first on CDM Create Digital Music.

by Peter Kirn at August 26, 2016 04:51 PM

August 25, 2016

digital audio hacks – Hackaday

Seeed Studio’s ReSpeaker Speaks All the Voice Recognition Languages

Seeed Studio recently launched its third Kickstarter campaign: ReSpeaker, an open hardware voice interface. After their previous Kickstarted IoT hardware, such as the RePhone, mostly focused on connectivity, the electronics manufacturer from Shenzhen now tackles another highly contested area of IoT: Voice recognition.

The ReSpeaker Core is a capable development board based on Mediatek’s MT7688 WiFi module and runs OpenWrt. Onboard is a WM8960 stereo audio codec with integrated 1W speaker/headphone driver, a microphone, an ATMega32U4 coprocessor, 12 addressable RGB LEDs and 8 touch sensors. There are also two expansion headers with GPIOs, I2S, I2C, analog audio and USB 2.0, plus an onboard microSD card slot.

The latter is especially useful to feed the ReSpeaker’s integrated speech recognition engine PocketSphinx with a vocabulary and audio file library, enabling it to respond to keywords and commands even when it’s not hooked up to the internet. Once it’s online, ReSpeaker also supports most of the available cloud based cognitive speech recognition services, such as Microsoft Cognitive Service, Amazon Alexa Voice Service, Google Speech API, Wit.ai and Houndify. It also comes with an SDK and Python API, supports JavaScript, Lua and C/C++, and it looks like the coprocessor features an Arduino-compatible bootloader.


The expansion header accepts shield-like hardware add-ons. Some of them are also available through the campaign. The most important one is the circular, far-field microphone array. Based on seven XVSM-2000 digital microphones, the extension board enhances the device’s hearing with sound localization, beam forming, reverb and noise suppression. A Grove extension board connects the ReSpeaker to Seeed’s current lineup of ready-to-use sensors, actuators and other peripherals.

Seeed also cooperates with the Meow King Audio Electronic Company to develop a nice tower-shaped enclosure with built-in speaker, 5W amplifier and battery. As a portable speaker, the Meow King Drive Unit (shown on the right) certainly doesn’t knock your socks off, but it practically turns the ReSpeaker into an open source version of the Amazon Echo — including the ability to run offline instead of piping everything you say to Big Brother.

According to Seeed, the freshly baked hardware will ship to backers in November 2016, and they do have a track record of shipping Kickstarter rewards on schedule. At the time of writing, some of the Crazy Early Birds are still available for $39. Enjoy the campaign video below and let us know what you think of this hardware in the comments!


Filed under: Crowd Funding, digital audio hacks

by Moritz Walter at August 25, 2016 03:31 PM

ardour

Ardour 5.1 released

Ardour 5.1 is primarily a bug fix release that corrects a number of issues (some notable, some minor) that were discovered after 5.0 had been released. There are a few new features and some improvements in the GUI, Lua and OSC support. Most users will want to upgrade to 5.1 as soon as possible.

Download  

Read the full list of changes below...

read more

by paul at August 25, 2016 01:08 PM

August 22, 2016

Scores of Beauty

Google Summer of Code 2016: cross-voice spanners

This summer, I’ve had the special opportunity to participate in the Google Summer of Code (GSoC) program with Lilypond. To describe GSoC briefly, students worldwide are paid a stipend by Google to work on coding projects for various open source organizations. My project deals with spanners, musical objects that start and end in different places, like slurs and crescendi. Specifically, I’ve been working on allowing users to create cross-voice spanners—spanners that start and end in different voices.

Suppose we want this crescendo to start in the first voice and end in the second voice:

<< { g\< a } \\ { f e d c\! } >>

Of course, this won’t work, since the start of the crescendo never knows about the end—which occurs in a different voice—and vice versa. The current solution to this problem is to end the crescendo in the same voice, using hidden notes:

<< { g\< a s s\! } \\ { f e d c } >>

Although this works, the code doesn’t express our intention well: we want to start a crescendo on the G and end it on the C. Moreover, this workaround can be unwieldy to use in more complicated situations. With the changes I’ve made, however, this result can be achieved with the following clearer code:

<< { g\=Staff.1\< a } \\ { f e d c\=Staff.1\! } >>

The key command here is \=, which sets the spanner id and share context of an event—in this case, the crescendo start event \< and stop event \!. As of version 2.18, this command can already be used to write overlapping slurs. Now, we could also use \= just to write overlapping crescendi:

{ c\=1\< d\=2\< e f\=1\! g\=2\! }

In this case, only the spanner id needs to be set, which serves the purpose of indicating which start and stop events to match together. To understand what the “share context” Staff means, however, suppose the snippet from before belongs to a larger score:

<<
  \new Staff { << { g\=Staff.1\< a } \\ { f e d c\=Staff.1\! } \\ { a a\< a a\! } >> }
  \new Staff { << { a b\=Staff.2\< } \\ { g f e\=Staff.2\! d } >> }
>>

Although both the first and second Staff have crescendo events, the events in each Staff are processed separately, because Lilypond is only matching start and stop events in the same Staff. In other words, the “share context” indicates the context within which Lilypond looks for matching start and stop events. So in this case, we could have used the same id in both Staffs, since each crescendo is limited to starting and ending in the same Staff.

I’ll briefly describe my general progression in developing this project. Throughout, I often asked questions and discussed ideas with the lilypond-devel mailing list, which was a great help.

  • First, I had to understand the overall process that transforms user input like \< and \! into the objects seen in the final output. This included learning Scheme (which fortunately didn’t take too long), as a lot of the relevant code is written in Scheme. From this, and with some clarifying answers from the mailing list, I learned why the code limited spanners to starting and ending in the same voice.
  • Next, I brainstormed and tried some different approaches to get Lilypond to connect start and stop events in different voices as part of the same spanner. I chose to experiment with the dynamic engraver, which was simple enough for me to easily understand and try changing. With the help of the mailing list, I eventually settled on an approach that seemed promising. Essentially, when a spanner is started, it may be stored in a context above (the “share context” described earlier), allowing another voice to end it when the stop event is received.
  • Although this did work for dynamics, when I submitted my code to be reviewed, I realized that I would have had to write a lot of similar code for other spanners. To get the dynamic engraver to support cross-voice spanners, I also had to change it to manage multiple spanners at the same time (e.g. overlapping crescendi). However, a lot of the code for doing so resembled already-existing code for the same purpose in the slur engraver. It was therefore suggested that I separate out the code for this, so it wouldn’t be repeated in all engravers that support cross-voice spanners.
  • To implement this, instead of having one engraver in each voice be responsible for multiple spanners, multiple engravers are created that each handle one spanner. Basically, instead of having to rewrite an engraver to keep track of multiple spanners at the same time, I wrote code to make multiple copies of the engraver; since the engraver can already deal with one spanner, having multiple copies effectively allows for multiple spanners to be created simultaneously. The details took some time to work out, but having finished, the dynamic engraver can now create cross-voice spanners with just a few changes needed.
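The matching idea at the heart of this can be sketched in a few lines of Python (hypothetical names, and an illustration of the concept only—not Lilypond’s actual C++/Scheme code): pending start events are stored under a (share context, spanner id) key, and a stop event from any voice carrying the same key closes the matching spanner.

```python
def match_spanners(events):
    """Pair spanner start/stop events that share a (context, id) key,
    regardless of which voice they occur in. Each event is a tuple
    (kind, voice, context, spanner_id, note); returns (start, stop) pairs."""
    pending = {}  # (context, spanner_id) -> start event awaiting its stop
    pairs = []
    for ev in events:
        kind, voice, context, sid, note = ev
        key = (context, sid)
        if kind == "start":
            pending[key] = ev
        elif kind == "stop":
            start = pending.pop(key, None)
            if start is not None:
                pairs.append((start, ev))
    return pairs
```

For the crescendo example above, a start on the G in the first voice and a stop on the C in the second voice share the key ("Staff", "1"), so they are paired; events in a different Staff would use a different key and never match.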

All the code I wrote for GSoC is found on my GitHub fork of Lilypond.

  • Any branch beginning with experimental contains unfinalized, often work-in-progress code that will not directly be included in Lilypond.
  • The gsoc-2016-spanners-old branch has my first attempt at cross-voice dynamics. Although (as mentioned above) this patch ended up requiring more refactoring, it contained a small change that was accepted as part of a different patch. This change has made it into Lilypond (12b68a3) and will be included in version 2.19.48, though not much will visibly change since only the internal representation of spanner id’s was altered.
  • Finally, the gsoc-2016-spanners branch has the code I completed at the end of GSoC; at the time of writing, it is not in Lilypond yet, but has been submitted for review.

I’ve learned a lot about both Lilypond and coding as I worked on this project. Although I was only able to make dynamics and slurs cross-voice during the timeframe—and to clarify, at the time of writing (2.19.47), this is not yet available in any release—I intend to continue contributing even after GSoC is over. Thank you Jan-Peter Voigt, for being my mentor, guiding me, and checking my code; Urs Liska, for helping me find a mentor and with the GSoC application; David Kastrup, for answering many of my questions and reviewing my code; everyone else who helped me on the mailing list; and the Lilypond community for creating Lilypond and providing me with this opportunity this summer.

Jeffery Shivers, another student working on Lilypond for GSoC, will also be posting his experience with GSoC soon. This will be updated with a link once it’s ready.

by Nathan Chou at August 22, 2016 06:43 PM

August 21, 2016

ardour

Ardour Pong

Ardour Pong from Robin Gareus on Vimeo.

A console classic for your console. Sample-accurate automation and all :)

read more

by paul at August 21, 2016 02:28 PM

August 19, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.3 stable release

The GStreamer team is pleased to announce the third bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.x. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

August 19, 2016 10:00 AM

August 18, 2016

OSM podcast

August 17, 2016

digital audio hacks – Hackaday

Bone Conduction Skull Radio

There are many ways to take an electrical audio signal and turn it into something you can hear. Moving coil speakers, plasma domes, electrostatic speakers, piezo horns, the list goes on. Last week at the Electromagnetic Field festival in the UK, we encountered another we hadn’t experienced directly before. Bite on a brass rod (sheathed in a drinking straw for hygiene), hear music.

The TOG Skull Radio demo box

This was Skull Radio, a bone conduction speaker courtesy of [Tdr], one of our friends from TOG hackerspace in Dublin, and its simplicity hid a rather surprising performance. A small DC motor has its shaft connected to a piece of rod, and a small audio power amplifier drives the motor. Nothing is audible until you bite on the rod, and then you can hear the music. The bones of your skull are conducting it directly to your inner ear, without an airborne sound wave in sight.

The resulting experience is a sonic cathedral from lips of etherial sibilance, a wider soft palate soundstage broadened by a tongue of bass and masticated by a driving treble overlaid with a toothy resonance before spitting out a dynamic oral texture. You’ll go back to your hi-fi after listening to [Tdr]’s Skull Radio, but you’ll know you’ll never equal its unique sound.

(If you are not the kind of audiophile who spends $1000 on a USB cable, the last paragraph means you bite on it, you hear music, and it sounds not quite as bad as you might expect.)

This isn’t the first bone conduction project we’ve featured here, we’ve seen a Bluetooth speaker and at least one set of headphones, but our favorite is probably this covert radio.


Filed under: digital audio hacks, Hackerspaces

by Jenny List at August 17, 2016 08:01 AM

August 15, 2016

Pid Eins

Preliminary systemd.conf 2016 Schedule

A Preliminary systemd.conf 2016 Schedule is Now Available!

We have just published a first, preliminary version of the systemd.conf 2016 schedule. There is a small number of white slots in the schedule still, because we're missing confirmation from a small number of presenters. The missing talks will be added in as soon as they are confirmed.

The schedule consists of 5 workshops by high-profile speakers during the workshop day, 22 exciting talks during the main conference days, followed by one full day of hackfests.

Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

by Lennart Poettering at August 15, 2016 10:00 PM

August 12, 2016

Libre Music Production - Articles, Tutorials and News

New major release of Ardour, version 5 is released!


The release of Ardour 5 has just been announced. This new release comes with some major new features and is a significant upgrade from the 4.x series. It is also the first release that officially supports the Windows platform. The following is a rundown of some of the latest features.

VCA faders

Ardour now includes VCA faders, a long requested feature.

by Conor at August 12, 2016 05:56 PM

ardour

Ardour 5.0 released

Ardour 5.0 is now available for Linux, OS X and Windows. This is a major release focused on substantial changes to the GUI and major new features related to mixing, plugin use, tempo maps, scripting and more. As usual, there are also hundreds of bug fixes. Ardour 5.0 can be parallel-installed with older versions of the program, and does not use the same preference files. It will load sessions from Ardour 2, 3 and 4, though with some potential minor changes.

Windows is now a supported platform

This is the first version of Ardour with official Windows support. Several years of products based on Ardour and our own nightly builds for Windows have made us confident that Ardour runs just as well on Windows as other platforms. We will not be providing support with:

  • installing Ardour on Windows
  • issues with audio hardware
  • system- or user-specific issues
These are out of scope for our developers and user community. However, if you have issues actually doing stuff with Ardour, or there are generic problems affecting all Windows users, we will try to provide Windows users with the same kind of assistance that we do on Linux and OS X.

Read more below for the full list of changes ....

read more

by paul at August 12, 2016 10:23 AM

August 04, 2016

digital audio hacks – Hackaday

Keytar Made Out Of A Scanner To Make Even the 80s Jealous

Do any of you stay awake at night agonizing over how the keytar could get even cooler? The 80s are over, so we know none of us do. Yet here we are: [James Cochrane] has gone out and turned an HP ScanJet into a keytar, for no apparent reason other than he thought it’d be cool. Don’t bring the 80s back, [James], the world is still recovering from the last time.

Kidding aside (except for the part about not bringing the 80s back), the keytar build is simple, but pretty cool. [James] took an Arduino, a MIDI interface, and a stepper motor driver and integrated it into some of the scanner’s original features. The travel that used to run the optics back and forth now produces the sound; the case of the scanner provides the resonance. He uses a sensor to detect when he’s at the end of the scanner’s travel and instantly reverses to avoid a collision.

An off-the-shelf MIDI keyboard acts as the input for the instrument. As you can hear in the video after the break, it’s not the worst sounding instrument in this age of digital music. As a bonus, he has an additional tutorial on making any stepper motor a MIDI device at the end of the video.
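The heart of any stepper-as-instrument build is the note-to-step-rate conversion. A minimal sketch (Python for readability; [James]’s actual code runs on the Arduino and may differ): map the MIDI note number to a frequency in equal temperament, then to the delay between step pulses, assuming one audible cycle per step.

```python
def midi_note_to_hz(note):
    """Equal temperament: A4 (MIDI note 69) = 440 Hz, 12 notes per octave."""
    return 440.0 * 2 ** ((note - 69) / 12)

def step_delay_us(note):
    """Microseconds between step pulses so the motor 'sings' the note,
    assuming one audible cycle per step pulse."""
    return int(round(1_000_000 / midi_note_to_hz(note)))
```

On real hardware the audible pitch also depends on the driver’s microstepping mode, so a build like this typically needs a calibration factor on top of this calculation.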

If you don’t have an HP ScanJet lying around, but you are up to your ears in surplus Commodore 64s, we’ve got another build you should check out.


Filed under: Arduino Hacks, digital audio hacks, musical hacks

by Gerrit Coetzee at August 04, 2016 11:00 AM

July 27, 2016

Pid Eins

FINAL REMINDER! systemd.conf 2016 CfP Ends on Monday!

Please note that the systemd.conf 2016 Call for Participation ends on Monday, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are very interested in yours, too!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions, that may cover any systemd-related topic you'd like. We are both interested in submissions from the developer community as well as submissions from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

ALSO: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

by Lennart Poettering at July 27, 2016 10:00 PM

July 26, 2016

OSM podcast

July 25, 2016

Libre Music Production - Articles, Tutorials and News

Autotuning & pitch correction with Zita-AT1 in Ardour


Be it for correcting those slightly out-of-tune notes from your singer, or going all the way to a Cher effect, an auto-tune plugin might come in handy. There aren’t a lot of those designed for Linux, though choices do exist:

by Conor at July 25, 2016 08:20 AM

July 24, 2016

News – Ubuntu Studio

Ubuntu Studio 16.04.1 Released

A new point release of the Xenial Xerus LTS has been released. As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs. Please see the 16.04.1 change summary for […]

by Set Hallstrom at July 24, 2016 09:55 AM

July 19, 2016

open-source – CDM Create Digital Music

Feel the beat on a Magic Trackpad or MacBook with free tool

Don’t like clicks or beeps or other sounds when using a metronome? Try some haptic feedback instead, with this free utility.

First, you’ll need an Apple trackpad that supports haptic feedback. Pretty soon, I suspect that will be all the new MacBooks – most of the line is badly in need of an update (another story there). For now, it’s the 2015 MacBook Pro, and so-called “New MacBook.”

Alternatively, you can use the Magic Trackpad 2. That’s perhaps the best option, because it’s wireless and you can position it anywhere you like – say, atop your keyboard or next to your Maschine.

Then, fire up this free utility, direct MIDI to the app, and you’ll feel as if someone is tapping you with the beat. No annoying sounds anywhere – perfect.

Since it listens to MIDI Clock, you can use any source, from Ableton Live (in turn synced to Ableton Link) to hardware (if it’s connected to your computer). It uses start/stop events to make sure it’s on the beat, then taps you on quarter notes.
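MIDI Clock makes this straightforward: the transport sends 24 0xF8 timing pulses per quarter note, and a Start message (0xFA) resets the phase. A minimal sketch of the tap logic (illustrative Python; not the utility’s actual code):

```python
MIDI_CLOCK, MIDI_START, MIDI_STOP = 0xF8, 0xFA, 0xFC

def quarter_note_ticks(byte_stream):
    """Return indices in the stream where a quarter note falls.
    MIDI Clock sends 24 0xF8 pulses per quarter note; 0xFA (Start)
    resets the phase so the taps stay on the beat."""
    beats = []
    ticks = 0
    running = False
    for i, b in enumerate(byte_stream):
        if b == MIDI_START:
            running, ticks = True, 0
        elif b == MIDI_STOP:
            running = False
        elif b == MIDI_CLOCK and running:
            if ticks % 24 == 0:  # every 24th pulse is a quarter note
                beats.append(i)
            ticks += 1
    return beats
```

An app like this would fire the trackpad’s haptic feedback at each returned beat rather than collect indices, but the phase-reset-on-Start behavior is the key to staying locked to the sequencer.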

The app is open source if anyone wants to check out the code. And you’ll find complete instructions. (Don’t download from the links at the top of the page; look at the beginning of the documentation for a ready-to-run app.)

https://github.com/faroit/magiclock

Genius.


Next, Apple Watch? (Also with “Taptic Engine™” support.) There are some entries out there, like this one, though they seem to be slightly hampered by the current restrictions on apps from Apple. (I like my Pebble, too!)

The haptic feedback-specialized Basslet, upcoming after a Kickstarter campaign, might actually be the best bet – and I could see people who didn’t buy into the music listening application still buying it for this.

The post Feel the beat on a Magic Trackpad or MacBook with free tool appeared first on CDM Create Digital Music.

by Peter Kirn at July 19, 2016 01:55 PM

July 18, 2016

Pid Eins

REMINDER! systemd.conf 2016 CfP Ends in Two Weeks!

Please note that the systemd.conf 2016 Call for Participation ends in less than two weeks, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are interested in yours even more!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions, that may cover any systemd-related topic you'd like. We are both interested in submissions from the developer community as well as submissions from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

And keep in mind:

REMINDER: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

by Lennart Poettering at July 18, 2016 10:00 PM

July 16, 2016

digital audio hacks – Hackaday

Hacklet 116 – Audio Projects

If the first circuit a hacker builds is an LED blinker, the second one has to be a noisemaker of some sort. From simple buzzers to the fabled Atari punk console, and guitar effects to digitizing circuits, hackers, makers and engineers have been building incredible audio projects for decades. This week the Hacklet covers some of the best audio projects on Hackaday.io!

We start with [K.C. Lee] and Automatic audio source switching. Two audio sources, one amplifier and speaker system; this is the problem [K.C. Lee] is facing. He listens to audio from his computer and TV, but doesn’t need to have both connected at the same time. Currently he’s using a DPDT switch to change inputs. Rather than manually flip the switch, [K.C. Lee] created this project to automatically swap sources for him. He’s using an STM32F030F4 ARM processor as the brains of the operation. The ADCs on the microcontroller monitor both sources and pick the currently active one. With all that processing power, and a Nokia LCD as an output, it would be a crime not to add some cool features. The source switcher also displays a spectrum analyzer, a VU meter, the date, and the time. It will even attenuate loud sources like webpages that start blasting audio.

 

Next up is [Adam Vadala-Roth] with Audio Blox: Experiments in Analog Audio Design. [Adam] has 32 projects and counting up on Hackaday.io. His interests cover everything from LEDs to 3D printing to solar to hydroponics. Audio Blox is a project he uses as his engineer’s notebook for analog audio projects. It is a great way to view a hacker figuring out what works and what doesn’t. His current project is a 4 board modular version of the Big Muff Pi guitar pedal. He’s broken this classic guitar effect down to an input board, a clipping board, a tone control, and an output stage. His PCB layouts, schematics, and explanations are always a treat to view and read!

Next we have [Paul Stoffregen] with Teensy Audio Library. For those not in the know, [Paul] is the creator of the Teensy family of boards, which started as an Arduino on steroids and has morphed into something even more powerful. This project documents the audio library [Paul] created for the Freescale/NXP ARM processor which powers the Teensy 3.1. Multiple audio files playing at once, delays, and effects are just a few things this library can do. If you’re new to the audio library, definitely check out [Paul’s] companion project Microcontroller Audio Workshop & HaD Supercon 2015. This project is an online version of the workshop [Paul] ran at the 2015 Hackaday Supercon in San Francisco.

Finally we have [drewrisinger] with DrDAC USB Audio DAC. DrDAC is a high quality DAC board which provides a USB-powered audio output for any PC. Computers these days are built down to a price. This means that lower quality audio components are often used. Couple this with the fact that a computer is an electrically noisy place, and you get less than stellar audio. Good enough for the masses, but not quite up to par if you want to listen to studio quality audio. DrDAC houses a PCM2706 audio DAC and quality support components in a 3D printed case. DrDAC was inspired by [cobaltmute’s] pupDAC.

If you want to see more audio projects and hacks, check out our new audio projects list. See a project I might have missed? Don’t be shy, just drop me a message on Hackaday.io. That’s it for this week’s Hacklet. As always, see you next week. Same hack time, same hack channel, bringing you the best of Hackaday.io!


Filed under: digital audio hacks, Hackaday Columns

by Adam Fabio at July 16, 2016 05:01 PM

July 15, 2016

digital audio hacks – Hackaday

Baby Monitor Rebuild is also ESP8266 Audio Streaming How-To

[Sven337]’s rebuild of a cheap and terrible baby monitor isn’t super visual, but it has so much more going on than it first seems. It’s also a how-to for streaming audio via UDP over WiFi with a pair of ESP8266 units, and includes a frank sharing of things that went wrong in the process and how they were addressed. [Sven337] even experimented with a couple of different methods for real-time compression of the transmitted audio data, for no other reason than the sake of doing things as well as they can reasonably be done without adding parts or spending extra money.
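
The core of the UDP streaming approach is small enough to sketch in Python (illustrative only; the real build runs on a pair of ESP8266s, and all the names here are mine): chop the audio into datagrams, tag each with a sequence number so the receiver can spot drops or reordering, and fire them across the network.

```python
import socket
import threading
import time

PORT = 50005  # arbitrary loopback port for this sketch

def send_chunks(chunks, addr="127.0.0.1"):
    """Chop 'audio' into small datagrams, prefix each with a sequence
    number, and fire them at the receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, chunk in enumerate(chunks):
        sock.sendto(seq.to_bytes(4, "big") + chunk, (addr, PORT))
    sock.close()

def recv_chunks(n):
    """Receive n datagrams and return (sequence, payload) pairs, so the
    caller can spot dropped or reordered packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", PORT))
    sock.settimeout(5.0)  # don't hang forever if a packet is lost
    out = []
    for _ in range(n):
        data, _addr = sock.recvfrom(2048)
        out.append((int.from_bytes(data[:4], "big"), data[4:]))
    sock.close()
    return out

# Loopback demo: receiver in a thread, sender in the main thread
received = []
rx = threading.Thread(target=lambda: received.extend(recv_chunks(3)))
rx.start()
time.sleep(0.2)  # give the receiver a moment to bind
send_chunks([b"chunk0", b"chunk1", b"chunk2"])
rx.join()
print(sorted(seq for seq, _ in received))
```

UDP gives you low latency at the price of reliability, which is exactly the trade-off a live audio monitor wants: a dropped packet is a brief click, while TCP retransmission would mean audible stalls.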

The original baby monitor had audio and video but was utterly useless for a number of reasons (French). The range and quality were terrible, and the audio was full of static and interference that was just as loud as anything the microphone actually picked up from the room. The user is left with two choices: either have white noise constantly coming through the receiver, or be unable to hear your child because you turned the volume down to get rid of the constant static. Our favorite part is the VOX “feature”: if the baby is quiet, it turns off the receiver’s screen; it has no effect whatsoever on the audio! As icing on the cake, the analog 2.4GHz transmitter interferes with the household WiFi when it transmits – which is all the time, because it’s always-on.

Small wonder [Sven337] decided to go the DIY route. Instead of getting dumped in the trash, the unit got rebuilt almost from the ground-up.

Re-using the enclosures meant that the DIY rebuild was something that looked as good as it worked. After all, [Sven337] didn’t want a duct-taped hack job in the nursery. But don’t let the ugly mess inside the enclosure fool you – there is a lot of detail work in this build. The inside may be a mess of wires and breakout boards, but it’s often a challenge to work within the space constraints of fitting a project into some other device’s enclosure.

The ESP8266 works, but it is not a completely natural fit for an audio baby monitor, as it lacks a quality ADC and DAC. On the other hand, it is cheap, easy to use, and has plenty of processing power. These attributes are the reason the ESP8266 has made its way into so many projects, including household gadgets like this WiFi webcam.


Filed under: digital audio hacks, how-to

by Donald Papp at July 15, 2016 11:00 PM

July 14, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #20: An interview with Marius Stärk

This month LMP Asks talks to Marius Stärk, Linux enthusiast and musician who produces all his music with FLOSS tools.

Hi Marius, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

My name is Marius Stärk, I'm 28 years old and I live in the city of Aachen, a medium-sized city at Germany's western border, adjacent to Belgium and the Netherlands.

by Conor at July 14, 2016 03:02 PM

July 12, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.9.1 unstable release (binaries)

Pre-built binary images of the 1.9.1 unstable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

July 12, 2016 12:00 AM

July 07, 2016

News – Ubuntu Studio

Backports, the benefits and the consequences.

Ubuntu Studio is happy to announce that backports are going to be rolling out soon and the first one will be Ardour. Backports are newer versions of applications, ported back to stable versions of the system. For example in the case of Ardour, Ubuntu Studio users running 14.04 or 16.04 will be able to have […]

by Set Hallstrom at July 07, 2016 10:11 AM

Libre Music Production - Articles, Tutorials and News

July 2016 Newsletter out now - Interviews, News and more

Our newsletter for July is now sent to our subscribers. If you have not yet subscribed, you can do so from our start page.

You can also read the latest issue online. In it you will find:

  • 3 new 'LMP Asks' interviews
  • News
  • New software release announcements

and more!

by admin at July 07, 2016 12:27 AM

July 06, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.9.1 unstable release

The GStreamer team is pleased to announce the first release of the unstable 1.9 release series. The 1.9 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.9 release series will lead to the stable 1.10 release series in the next weeks. Any newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

July 06, 2016 12:00 PM

July 01, 2016

digital audio hacks – Hackaday

1024 “Pixel” Sound Camera Treats Eyes to Real-Time Audio

A few years ago, [Artem] learned about ways to focus sound in an issue of Popular Mechanics. If sound can be focused, he reasoned, it could be focused onto a plane of microphones. Get enough microphones, and you have a ‘sound camera’, with each microphone a single pixel.

Movies and TV shows about comic books are now the height of culture, so a device using an array of microphones to produce an image isn’t just an interesting demonstration of FFT, signal processing, and high-speed electronic design. It’s a Daredevil camera, and it’s one of the greatest builds we’ve ever seen.

[Artem]’s build log isn’t a step-by-step process on how to make a sound camera. Instead, he went through the entire process of building this array of microphones, and as with all amazing builds, the first attempt never works. The first prototype was based on a flatbed scanner camera: simply a flatbed scanner in a lightproof box with a pinhole. The idea was that by scanning a microphone back and forth, using the pinhole as a ‘lens’, [Artem] could detect where a sound was coming from. He pulled out his scanner and a signal generator and ran the experiment. It didn’t work. The box was not soundproof, the inner chamber should have been anechoic, and even if it had worked, this camera would only be able to produce an image or two a minute.

8×8 microphone array (mics on opposite side) connected to an Altera FPGA at the center

The idea sat on the shelf of [Artem]’s mind for a while, and along the way he learned about FFT and how the gigantic Duga over-the-horizon radar actually worked. Math was the answer, and by using an FFT to transform a microphone’s signal from up-and-down samples into buckets of frequency and intensity, he could build this camera.

That was the theory, anyway. Practicality has a way of getting in the way, and to build this gigantic sound camera he would need dozens of microphones, dozens of amplifiers, and a controller with enough analog inputs, ADCs, and processing power to make sense of all of it.

This complexity collapsed when [Artem] realized there was an off-the-shelf part that was a perfect microphone camera pixel. MEMS microphones, like the kind found in smartphones, take analog sound and turn it into a digital signal. Feed this into a fast enough microcontroller, and you can perform FFT on the signal and repeat the same process on the next pixel. This was the answer, and the only thing left to do was to build a board with an array of microphones.
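
The per-pixel processing described above (sample a mic, run an FFT, read off intensity per frequency bucket) looks something like this NumPy toy, with a simulated tone standing in for a real MEMS microphone:

```python
import numpy as np

fs = 48000  # sample rate of one microphone "pixel"
n = 1024    # samples per frame
t = np.arange(n) / fs

# Pretend this pixel hears a 3 kHz tone plus a little noise
signal = np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.randn(n)

# FFT: from up-and-down samples to buckets of frequency and intensity
spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1.0 / fs)

loudest = freqs[np.argmax(spectrum)]
print(loudest)  # the dominant bucket lands at/near 3 kHz
```

Do this for every microphone in the array and map each pixel’s intensity at a chosen frequency to a color, and you have one frame of the sound camera.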

[Artem]’s microphone camera is constructed out of several modules, each consisting of an 8×8 array of MEMS microphones controlled via an FPGA. These individual modules can be chained together, and the ‘big build’ is a 32×32 array. After a few problems with manufacturing, the board actually worked. He was recording 64 channels of audio from a single panel. Turning on the FFT visualization and pointing it at a speaker revealed that yes, he had indeed made a sound camera.

The result is a terribly crude movie with blobs of color, but that’s the reality of a camera that only has 32×32 resolution. Right now the sound camera works, the images are crude, and [Artem] has a few ideas of where to go next. A cheap PC is fast enough to record and process all the data, but now it’s an issue of bandwidth; 30 frames per second is a total of 64 Mbps of data. That’s doable, but it would need another FPGA implementation.

Is this sonic vision? Yes, in that the board technically works. No, in that the project is stalled, and it’s expensive by any electronics hobbyist’s standards. Still, it’s one of the best builds to grace our front page.

[Thanks zakqwy for the tip!]


Filed under: digital audio hacks, FPGA, slider

by Brian Benchoff at July 01, 2016 08:01 AM

June 23, 2016

OSM podcast

rncbc.org

Qtractor 0.7.8 - The Snobby Graviton is out!


So it's first solstice'16...

The world sure is a harsh mistress... yeah, you read that right! Heinlein's Moon has just been intentionally rephrased. Yeah, whatever.

Just about when the UK vs. EU is there under close scrutiny and sizzling winds of trumpeting (pun intended, again) coming from the other side of the pond, we all should mark the days we're living in.

No worries: we still have some feeble but comforting news:

Qtractor 0.7.8 (snobby graviton) is out!

Nevertheless ;)

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Change-log:

  • MIDI file track names (and any other SMF META events) are now converted to and from the base ASCII/Latin-1 encoding, as much to prevent invalid SMF whenever non-Latin-1 UTF-8 encoded MIDI track names are given.
  • MIDI file tempo-map and location markers import/export is now hopefully corrected, after almost a decade in mistake, regarding MIDI resolution conversion, when different than current session's setting (TPQN, ticks-per-quarter-note aka. ticks-per-beat, etc.)
  • Introducing LV2 UI Show interface support for other types than Qt, Gtk, X11 and lv2_external_ui.
  • Prevent any visual updates while exporting (freewheeling) audio tracks that have at least one plugin activate state automation enabled for playback (as much for not showing messages like "QObject::connect: Cannot queue arguments of type 'QVector'"... anymore).
  • The common buses management dialog (View/Buses...) sees the superfluous Refresh button finally removed, while two new button commands take its place: (move) Up and Down.
  • LV2 plug-in Patch support has been added and LV2 plug-ins parameter properties manipulation is now accessible on the generic plug-in properties dialog.
  • Fixed a recently introduced bug, that rendered all but one plug-in instance to silence, affecting only DSSI plug-ins which implement DSSI_Descriptor::run_multiple_synths() eg. fluidsynth-dssi, hexter, etc.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (ongoing; help still wanted, always!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Flattr this

 

Enjoy && Have (lots of) fun.

by rncbc at June 23, 2016 06:00 PM

Nothing Special

Room Treatment and Open Source Room Evaluation

It's hard to improve something you can't measure.

My studio space is much, much too reverberant. This is not surprising, since it's a basement room with laminate flooring and virtually no soft, absorbent surfaces at all. I planned to add acoustic treatment from the get-go, but funding made me wait until now. I've been recording DI guitars, drum samples, and synth programming, but nothing acoustic until the room gets tamed a little bit.



(note: I get pretty explanatory about why bass traps matter in the next several paragraphs. If you only care about the measurement stuff, skip to below the pictures.)

Well, how do we know what needs taming? First, there are some rules of thumb. My room is about 13'x11'x7.5', which isn't an especially large space. This means that sound waves bouncing off the walls will have some strong resonances at 13', 11', and 7.5' wavelengths, which equate to about 86Hz, 100Hz, and 150Hz respectively. There will be many more resonances, but these will be the strongest ones. They become standing waves, where the walls just bounce the acoustic energy back and forth and back and forth and back and forth... Not forever, but longer than the other frequencies in my music.
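
As a sanity check on those numbers, here is my own back-of-the-envelope sketch (assuming ~1130 ft/s for the speed of sound; none of this code is from the post). The standard axial-mode formula between two parallel walls is f_n = n·c/(2L); the figures quoted above show up as the n = 2 entries, with the fundamental of each dimension an octave lower:

```python
# Axial room-mode sketch: f_n = n * c / (2 * L)
C_FT_S = 1130.0  # speed of sound, ft/s, at roughly room temperature

def mode_frequencies(dimension_ft, count=4):
    """First few axial mode frequencies (Hz) for one room dimension."""
    return [n * C_FT_S / (2.0 * dimension_ft) for n in range(1, count + 1)]

for dim in (13.0, 11.0, 7.5):
    print(dim, ["%.0f Hz" % f for f in mode_frequencies(dim)])
```

For the 13', 11', and 7.5' dimensions this gives ~87Hz, ~103Hz, and ~151Hz at n = 2, with fundamentals near 43Hz, 51Hz, and 75Hz, so the problem region extends even lower than the quoted figures.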

For my room, these are very much in the audible spectrum, so this acoustic energy hanging around in the room will be covering up other stuff I want to hear (for a few hundred extra ms) while mixing. In addition to these primary modes there will also be resonances at 2x, 3x, 4x, etc. of these frequencies. Typically the low end is where it gets hardest to hear what's going on, but all the reflections add up to the total reverberance, which is currently a bit too much for my recordings.

Remember, acoustic waves are switching (or waving, even) between high pressure/low speed and low pressure/high speed. Where the high points lie depends on the wavelength (and the location of the sound source). At the boundaries of the room, the air carrying the primary modes' waves (theoretically) doesn't move at all. That means the pressure is highest there. At the very middle of the room you have a point where air carrying these waves is moving the fastest. Of course, the air is usually carrying lots of waves at the same time, so how it's moving/pressurized in the room is hard to predict exactly.

With large wavelengths like the ones we're most worried about, you aren't going to stop them with a 1" thick piece of foam hung on the wall (no matter how expensive it was). You need a longer space to act on the wave and trap more energy. With small rooms more or less the only option is through porous absorbers which basically take acoustic energy out of the room when air carrying the waves tries to move through the material of the treatment. Right against the wall air is not moving at all, so putting material there isn't going to be very effective for the standing waves. And only 1" of material isn't going to act on very much air. So you need volume of material and you need to put it in the right place.

Basically, thicker is better to stop these low waves. If you have sufficient space in your room, put in a floor-to-ceiling 6'-deep bass trap. But most of us don't have that kind of space to give up. The thicker the panel, the less dense a material you should use. Thick traps will also stop higher frequencies, so basically, just focus on the low stuff and the highs will be fine. Often, if the trap is not at a direct reflecting point from the speaker, it's advised to glue kraft paper to the material, which bounces some of the ambient high end around the room so it's not too dead. How dead is too dead? How much high end does each one bounce? I don't know. It's just a rule of thumb. The rule for depth is quarter wavelength: an 11' wave really will be stopped well by a 2.75'-thick trap. This thickness guarantees that there will be some air moving somewhere through the trap even if you put it right in the null. Do you have a couple extra feet of space to give up all around the room? Me neither. But we'll come back to that. Also note that surface area is more important than thickness: once you've covered enough wall/floor/ceiling, then the next priority is thickness.

The next principle is placement. You can place treatment wherever you want in the room, but some places are better than others. Right against the wall is OK, because air is moving right up until the wall, but it will be better with a little gap, because the air is moving faster a little further from the wall. So we come back to the quarter-wavelength rule. The most effective placement of a panel is spaced off the wall by a gap equal to its thickness. So a 3" panel is best 3" away from the wall. This effectively doubles the thickness of your panel. Thus we see placement and thickness are related. Now your 3" panel is acting like it's 6" thick, damping pretty effectively down to 24" waves (~563Hz). It also works well on all shorter waves. Bass traps are really broadband absorbers. But... 563Hz is a depressingly high frequency when we're worried about 80Hz. This trap will do SOMETHING to even 40Hz waves, but not a whole lot. What do we do if our 13' room mode is causing a really strong resonance?

You can move your trap further into the room. This leaves a gap in the absorption curve, but it makes the absorption reach lower. So move the 3" panel out to a 6" gap and you won't be as effective at absorbing 563Hz, but now it works much better on 375Hz. You are creating a tuned trap. It still works some on 563Hz, but the absorption curve will have a low point and then a bump at 375Hz. Angling the trap so the gap varies can help smooth this response, making it absorb more frequencies, but less effectively at specific ones. So trade off a smooth curve against really absorbing a lot of energy at a specific frequency, as needed.
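
The quarter-wavelength arithmetic above is simple enough to put in a little helper (my own sketch, using the same ~1130 ft/s speed of sound as before):

```python
C_IN_S = 1130.0 * 12.0  # speed of sound in inches/s (~13560)

def quarter_wave_freq(thickness_in, gap_in):
    """Lowest frequency a porous panel absorbs well, by the
    quarter-wavelength rule: effective depth = panel thickness + air gap."""
    depth = thickness_in + gap_in
    return C_IN_S / (4.0 * depth)

print(quarter_wave_freq(3, 3))  # 3" panel, 3" gap: works down to ~565 Hz
print(quarter_wave_freq(3, 6))  # 3" panel, 6" gap: tuned down to ~377 Hz
```

Plugging in the post's examples reproduces its numbers: a 3" panel with a 3" gap reaches down to roughly 565Hz, and pushing the gap out to 6" brings that down to roughly 377Hz.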

The numbers here are pretty theoretical. Even though the trap is tuned to a certain frequency, a lot of other frequencies will get absorbed. Some waves will enter at angles, which makes the trap seem thicker. Some waves will bounce off. Some waves will diffract (bend) around the trap somewhat. There are so many variables that it's very difficult to predict acoustics precisely. But these rules of thumb are applicable in most cases.

The final thing to discuss is material. It's best to find one that has been tested, with published numbers, so you have a good idea whether and how well it will work. Mineral wool is a fibrous material that resists air passing through it. Fiberglass insulation can work too. Rigid fiberglass Owens Corning 703 is the standard choice, but mineral wool is cheaper and just as effective, so it's becoming more popular. Both materials (and there are others) come in various densities, and here the idea that thicker means less dense comes into play. This is because if it's too dense, acoustic waves could bounce back out on their way through rather than be absorbed.

Man. I didn't set out to give a lecture on acoustics, but it's there and I'm not deleting it. I do put the bla in blog, remember? There's a lot more (and better) reading you can do at an acoustic expert's site.

For me and my room (and my budget), I started out building two 9"-deep, 23"-wide floor-to-ceiling traps for the two corners I have access to (the other two corners are blocked by the door and my wife's sewing table). These will be stuffed with Roxul Safe and Sound (SnS), which is a lower density mineral wool. It's available from Lowe's online, but it was cheaper to find a local supplier to special order it for me.


Roxul compresses it in the packaging nicely

I will build a 6"x23" panel using whatever's left and will place it behind the listening position. I also ordered a bag of the denser Roxul Rockboard 60 (RB60). I'm still waiting for it to come in (rare stuff to find in little Logan UT, but I found a supplier kind enough to order it and let me piggy back on their shipping container so I'm not paying any shipping, thanks Building Specialties!). I will also build four 4"x24"x48" panels out of Roxul Rockboard 60 (when it finally arrives) which is a density that more or less matches the performance of OC703.  These will be hung on the walls at the first reflecting points and ceiling corners. Next year or so when I have some more money I plan to buy a second bag of the rockboard which will hopefully be enough treatment to feel pretty well done. I considered using the 2" RB60 panels individually so I can cover more surface (which is the better thing acoustically), but in the end I want 4" panels and I don't know if it will be feasible to rebuild these later to add thickness.
my stack of flashing

I more or less followed Steven Helm's method with some variations. The stuff he used isn't very available, so I bought some 20-gauge 1.5" galvanized L-framing (angle flashing) from the same local supply shop that got me the mineral wool. They had 25ga., but I was worried it would be too flimsy, considering that even on the rack a lot of it got bent. I just keep envisioning my kids leaning against them or something and putting a big dent in the side. After buying it I worried it would be too heavy, but now, after the build, I think the thicker material was a good choice for my towering 7.5' bass traps. For the smaller 2'x4' panels that are going to be hung up, I'm not sure yet.

I chose not to do a wood frame because I thought riveting would be much faster than nailing, since I don't have a compressor yet. Unfortunately I didn't foresee how long it can take to drill through 20ga. steel. After the first trap I found it's much faster to punch a hole with a nail and then drill it out to the rivet size. It's nice when you have something to push against (a board underneath), but since I was limited on workspace I sometimes had to drill sideways. A set of vise-grip pliers made that much easier.


Steven's advice about keeping it square is very good; it's something I didn't do the best at on the first trap, but I wasn't too far off either. The key is using the square to keep your snips cutting squarely. Also, since my frame stock is so thick it doesn't bend very tightly, so I found it useful to take some pliers and twist the corner a bit to square it up.
Corner is a bit round

a bit tighter corner now
Since my traps are taller than a single SnS panel, I had to stack them and cut 6" off the top. A serrated knife works best for cutting this stuff, but I didn't have an old one around, so I improvised one from some scrap sheet metal.

I staggered the seams to try to make a more homogeneous material.


With all the interior assembled, I think the frames actually look good enough that you could keep them on the outside, but my wife preferred the whole thing be wrapped in fabric. I don't care either way.


Before covering them, though, I glued on some kraft paper using spray adhesive. I worked from top to bottom, but some of them got a bit wrinkled.




The paper was a bit wider than the frame, so I cut around the frame and stuffed it behind a bit, so it has a tidier look.





I'd say they look pretty darn good even without fabric!




Anyway, all that acoustic blabber above boils down to this: even following rules of thumb, the best thing to do is measure the room before and after treatment, to see what needs to be treated and how well your treatment did. If it's good, leave it; if it's bad, you can add more or try to move things around to address where it's performing poorly.

Since measuring is important, and I'm kinda a stickler for open source software, I will show you today how to do it. The de-facto standard for measurement is the Room EQ Wizard (REW) freeware program. It's free but not libre, so I decided to use what was libre. Full disclosure: I installed REW and tried it, but could never get sound to come out of it, so that helped motivate the switch. I was impressed that REW had a Linux installer, but I couldn't find any answers on getting sound out. It's Java-based and not JACK-capable, so it couldn't talk to my FireWire soundcard. REW is very good, but for the freedom idealists out there we can use Aliki.

The method is the same in both: generate a sweep of sine tones with your speakers, record the room's response with your mic, and do some processing that creates an impulse response for your room. An impulse is a broadband signal that contains all frequencies equally for a very, very (infinitely) short amount of time. True impulses are difficult to generate, so it's easier to send the frequencies one at a time and then combine them with some math. I've talked a little about measuring impulse responses before. The program I used back then (qloud) isn't compiling easily for me these days, because it hasn't been updated for modern Qt libraries, and Aliki is more tuned for room measurement than loudspeaker measurement.

I am most interested in 2 impulse responses: 1. the room response between my monitors and my ears while mixing, and 2. the room response between my instruments and the mic. Unfortunately I can't take my monitors or my mic out of the measurement, because I don't have anything else to generate or record the sine sweeps with. So each measurement will have those parts of my signal chain's frequency response convolved in too, but I think they are flat enough to get an idea, and they'll be consistent for before-and-after-treatment comparisons. I don't have a planned position for where I will be recording in this room, but the listening position won't be moving, so I'm focused on response 1.

The Aliki manual linked above is pretty good, and for the most part I'm not going to rehash it here. You first select a project location; I found that anywhere but my home directory didn't work. Aliki makes four folders in that location to store different audio files: sweep, capture, impulse, and edited files.

We must first make a sweep, so click the sweep button. I'm going from 20Hz to 22000Hz; may as well see the full range, no? A longer sweep can actually reduce the noise of the measurement, so I went a full 15 seconds. This generates an audio file with the sweep in it, in the sweep folder. Aliki stores everything as .ald files (basically a wav with a simpler header, I think).

Next step: capture. Set up your audio input and output ports, and pick your sweep file for it to play. Use the test function to get your levels. I found that even with my preamps cranked, the levels coming in from my mic were low. It was night, so I didn't want to play it much louder. You can edit the captures if you need to. Each capture makes a new file (or files) in the capture directory.

I did this over several days, because I measured before treatment, then with the traps in place before the paper was added, and again after the paper was glued on. Use the load function to get your files and it will show them in the main window. Since my levels were low, I went ahead and misused the edit functions to add gain to the capture files so they were somewhat near full swing.

The next step is the convolution that removes the sweep and calculates the impulse response. Select the sweep file you used, set the end time to be longer than your sweep, and click apply; it should give you the impulse response. Be aware that if your levels are low like mine were, you'll only get the tiniest blip of waveform near zero. Save that as a new file and then go to edit.

In edit, you'll likely need to adjust the gain, but you can also adjust the length. In the end you have a lovely impulse response that you can export to a .wav file and listen to (though it's not much to listen to) or, more practically, use in your favorite convolution reverb plugin like IR or KlangFalter.

But we don't want to use this impulse response for convolving signals. We can already get that reverb just by playing an instrument in our room! We want to analyze the impulse response to see if there's improvement or if something still needs to be changed. This is where I imported the IR wav files into GNU Octave.

I wrote a few scripts to help out, namely plotIREQ and plotIRwaterfall. They can be found in their git repository. I also made fftdecimate, which smooths the raw plotIREQ plot from this:



to this:

I won't go through the code in too much detail; if you'd like me to, leave a comment and I'll do another post. But look at plotMyIRs.m for usage examples of how I generated these plots.
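The smoothing idea behind an fftdecimate-style plot is just band-averaging the raw magnitude response on a log frequency axis. A rough Python/numpy stand-in for my Octave script (all names and the points-per-octave choice are mine):

```python
import numpy as np

def smooth_response(ir, fs=48000, points_per_octave=12, fmin=20.0, fmax=20000.0):
    """Average raw FFT magnitude bins into log-spaced fractional-octave bands.

    Each output point is the mean magnitude (in dB) of the FFT bins
    falling inside its band, so the jagged raw response becomes readable.
    """
    mag = np.abs(np.fft.rfft(ir))
    freqs = np.fft.rfftfreq(len(ir), 1.0 / fs)
    n_oct = np.log2(fmax / fmin)
    edges = fmin * 2 ** np.linspace(0.0, n_oct, int(n_oct * points_per_octave) + 1)
    centers, levels = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (freqs >= lo) & (freqs < hi)
        if sel.any():
            centers.append(np.sqrt(lo * hi))  # geometric band centre
            levels.append(20 * np.log10(np.mean(mag[sel]) + 1e-12))
    return np.array(centers), np.array(levels)
```

Plotting `levels` against `centers` on a log x-axis gives the smoothed EQ curve.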


You can see the big bump from around 150 Hz to 2 kHz, and a couple of big valleys at 75 Hz, 90 Hz, 110 Hz, etc. One thing I decided from looking at these is that the subwoofer should be turned up a bit, since my Blue Sky Exo2s cross over at around 150 Hz, and everything below that measured rather low.

I was hoping for a smoother result, especially in the low end, but I plan to build more broadband absorbers for the first reflection points. While a 4" thick panel doesn't target the really low end like these bass traps, they do have some effect, even on the very low frequencies. So I hope they'll have a cumulative effect down on that lower part of the graph.


The other point I'd like to comment on is that the paper didn't seem to make much of a difference. It's possible that since it wasn't factory-glued onto the rockwool, it lacks a sufficient bond to transfer the energy properly. It doesn't seem to hurt the results much either; in fact, around 90 Hz it seems to actually make the response smoother, so I don't plan to remove it (yet, at least).

The last plots I want to look at are the waterfall plots. These show how the frequencies respond over time, so you can see if any frequencies are ringing/resonating and need better treatment.


Here we see some anomalies. Just comparing the first and final plots, it's easy to see that nearly every frequency decays much more quickly (we're focused on the region 400 Hz and below, since that's where the room's primary modes lie). You also see a long resonance somewhere around 110 Hz that still isn't addressed, which is probably the next target. I can try to move the current traps out from the wall and see if that helps, or make a new panel and try to tune it.
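For reference, a waterfall plot is just the spectrum of the impulse response taken from successively later start points, so a ringing mode shows up as a frequency column whose level barely drops from one slice to the next. A numpy sketch of the computation (slice count and window choices are my own, not necessarily what plotIRwaterfall does):

```python
import numpy as np

def waterfall(ir, fs=48000, n_slices=30, window_ms=200.0, step_ms=10.0):
    """Spectra of the IR starting at successively later times.

    Rows are time slices, columns are frequency bins; a slowly decaying
    column indicates a resonance that needs more treatment.
    """
    win = int(window_ms * fs / 1000)
    step = int(step_ms * fs / 1000)
    ir = np.pad(ir, (0, win + n_slices * step))  # every slice stays full length
    rows = []
    for k in range(n_slices):
        seg = ir[k * step : k * step + win] * np.hanning(win)
        rows.append(20 * np.log10(np.abs(np.fft.rfft(seg)) + 1e-12))
    return np.array(rows)
```

Feeding in a synthetic decaying 110 Hz sinusoid shows exactly the kind of slowly sinking ridge the real measurement exposed at that frequency.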

Really though I'm probably going to wait until I've built the next set of panels.
Hope this was informative and useful. Try out those octave scripts. And please comment!

by Spencer (noreply@blogger.com) at June 23, 2016 03:10 PM

June 20, 2016

open-source – CDM Create Digital Music

A composition you can only hear by moving your head

“It’s almost like there’s an echo of the original music in the space.”

After years of music being centered on stereo space and fixed timelines, sound seems ripe for reimagination as open and relative. Tim Murray-Browne sends us a fascinating idea for how to do that, in a composition in sound that transforms as you change your point of view.

Anamorphic Composition (No. 1) is a work that uses head and eye tracking so that you explore the piece by shifting your gaze and craning your neck. That makes for a different sort of composition – one in which time is erased, and fragments of sound are placed in space.

Here’s a simple intro video:

Anamorphic Composition (No. 1) from Tim Murray-Browne on Vimeo.

I was also unfamiliar with the word “anamorphosis”:

Anamorphosis is a form which appears distorted or jumbled until viewed from a precise angle. Sometimes in the chaos of information arriving at our senses, there can be a similar moment of clarity, a brief glimpse suggestive of a perspective where the pieces align.

Tech details:

The head tracking and most of the 3D is done in Cinder using the Kinect One. This pipes OSC into SuperCollider, which does the sound synthesis. It’s pretty much entirely additive synthesis based around the harmonics of a bell.
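Additive bell synthesis is easy to sketch outside SuperCollider too: a sum of decaying sinusoids whose frequency ratios follow a bell's inharmonic partials. The ratios below are loosely modelled on a minor-third church bell (hum, prime, tierce, quint, nominal) and are my assumption, not Tim's actual tuning:

```python
import numpy as np

# (frequency ratio, relative amplitude) per partial -- assumed values
BELL_PARTIALS = [(0.5, 1.0), (1.0, 0.6), (1.2, 0.5), (1.5, 0.4), (2.0, 0.3)]

def bell(f0=440.0, duration=2.0, fs=44100):
    """Additive bell tone: exponentially decaying sinusoids,
    with higher partials dying out sooner, as real bells do."""
    t = np.arange(int(duration * fs)) / fs
    out = np.zeros_like(t)
    for ratio, amp in BELL_PARTIALS:
        decay = np.exp(-t * 2.0 * ratio)  # faster decay for higher partials
        out += amp * decay * np.sin(2 * np.pi * f0 * ratio * t)
    return out / np.max(np.abs(out))
```

In the installation each fragment of sound presumably gets its own set of partials, retriggered as your gaze lands on it.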

I’d love to see experiments with this via acoustically spatialized sound, too (not just virtual tracking). Indeed, this question came up in a discussion we hosted in Berlin in April, as one audience member talked about how his perception of a composition changed as he tilted his head. I had a similar experience taking in the work of Tristan Perich at Sónar Festival this weekend (more on that later).

On the other hand, virtual spaces will present still other possibilities – as well as approaches that would bend the “real.” With the rise of VR experiences in technology, the question of point of view in sound will become as important as point of view in image. So this is the right time to ask this question, surely.

Something is lost on the Internet, so if you’re in London, check out the exhibition in person. It opens on the 27th:

http://timmb.com/anamorphic-composition-no-1/

The post A composition you can only hear by moving your head appeared first on CDM Create Digital Music.

by Peter Kirn at June 20, 2016 04:30 PM

Libre Music Production - Articles, Tutorials and News

LMP Asks #19: An interview with Vladimir Sadovnikov

LMP Asks #19: An interview with Vladimir Sadovnikov

This month LMP Asks talks to Vladimir Sadovnikov, programmer and sound engineer, about his project, LSP plugins, which aims to bring new, previously non-existent plugins to Linux. As well as the LSP plugin suite, Vladimir has also contributed to other Linux audio projects such as Calf Studio Gear and Hydrogen.

by Conor at June 20, 2016 12:49 PM

June 18, 2016

Libre Music Production - Articles, Tutorials and News

Check out 'Why, Phil?', new Linux audio webshow series

Check out 'Why, Phil?', new Linux audio webshow series

Philip Yassin has recently started an upbeat Linux audio webshow series called 'Why, Phil?'. Though only recently started, the series has already notched up an impressive 7 episodes, most of which revolve around Phil's favourite DAW, Qtractor.

by Conor at June 18, 2016 06:45 PM

The "Gang of 3" is loose again


The Vee One Suite, a.k.a. the gang of three old-school homebrew software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), are here released once again, now in their tenth reincarnation.

by yassinphilip at June 18, 2016 03:25 PM

June 17, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.2 stable release (binaries)

Pre-built binary images of the 1.8.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 17, 2016 02:00 PM

June 16, 2016

rncbc.org

Vee One Suite 0.7.5 - The Tenth beta is out!


Hiya!

The Vee One Suite, a.k.a. the gang of three old-school homebrew software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler), are here released once again, now in their tenth reincarnation.

All available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The esoteric change-log goes like this:

  • LV2 Patch property parameters and Worker/Schedule support are now finally in place, allowing for sample file path selections from generic user interfaces (applies to samplv1 and drumkv1 only).
  • All changes to most continuous parameter values are now smoothed to a fast but finite slew rate.
  • All BPM sync options to the current transport (Auto) have been refactored to a new special minimum value (which is now zero).
  • In compliance with the LV2 spec, MIDI Controllers now affect cached parameter values only, via shadow ports, instead of input control ports directly, mitigating their read-only restriction.
  • Make sure LV2 plug-in state is properly reset on restore.
  • Dropped the --enable-qt5 option from configure, as it was found redundant given that it's the build default anyway (suggestion by Guido Scholz, while for Qtractor, thanks).

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And then again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.5 (tenth official beta) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git


samplv1 - an old-school polyphonic sampler

samplv1 0.7.5 (tenth official beta) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git


drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.5 (tenth official beta) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git


Enjoy && have lots of fun ;)

by rncbc at June 16, 2016 05:30 PM

June 15, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #18: Andrew Lambert & Neil Cosgrove

LMP Asks #18: Andrew Lambert & Neil Cosgrove

This month we interviewed Andrew Lambert and Neil Cosgrove, members of Lorenz Attraction and developers of LNX_Studio, a cross platform, customizable, networked DAW written in the SuperCollider programming language.  Please see the end of the article for links to LNX_Studio and Lorenz Attraction's music!

by Scott Petersen at June 15, 2016 05:09 PM

June 13, 2016

digital audio hacks – Hackaday

Ball Run Gets Custom Sound Effects

Building a marble run has long been on my project list, but now I’m going to have to revise that plan. In addition to building an interesting track for the orbs to traverse, [Jack Atherton] added custom sound effects triggered by the marble.

I ran into [Jack] at Stanford University’s Center for Computer Research in Music and Acoustics booth at Maker Faire. That’s a mouthful, so they usually go with the acronym CCRMA. In addition to his project there were numerous others on display and all have a brief write-up for your enjoyment.

[Jack] calls his project Leap the Dips which is the same name as the roller coaster the track was modeled after. This is the first I’ve heard of laying out a rolling ball sculpture track by following an amusement park ride, but it makes a lot of sense since the engineering for keeping the ball rolling has already been done. After bending the heavy gauge wire [Jack] secured it in place with lead-free solder and a blowtorch.

As mentioned, the project didn’t stop there. He added four piezo elements which are monitored by an Arduino board. Each is at a particularly extreme dip in the track which makes it easy to detect the marble rolling past. The USB connection to the computer allows the Arduino to trigger a MaxMSP patch to play back the sound effects.

For the demonstration, Faire goers wear headphones while letting the balls roll, but in the video below [Jack] let me plug in directly to the headphone port on his MacBook. It’s a bit weird, since there’s no background sound of the Faire during this part, but it was the only way I could get a reasonable recording of the audio. I love the effect, and think it would be really fun to package this as a standalone using the Teensy Audio library and audio adapter hardware.


Filed under: cons, digital audio hacks

by Mike Szczys at June 13, 2016 06:31 PM

Synchronize Data With Audio From A $2 MP3 Player

Many of the hacks featured here are complex feats of ingenuity that you might expect to have emerged from a space-age laboratory rather than a hacker’s bench. Impressive stuff, but on the other side of the coin the essence of a good hack is often just a simple and elegant way of solving a technical problem using clever lateral thinking.

Take this project from [drtune], he needed to synchronize some lighting to an audio stream from an MP3 player and wanted to store his lighting control on the same SD card as his MP3 file. Sadly his serial-controlled MP3 player module would only play audio data from the card and he couldn’t read a data file from it, so there seemed to be no easy way forward.

His solution was simple: realizing that the module has a stereo DAC but a mono amplifier he encoded the data as an audio FSK stream similar to that used by modems back in the day, and applied it to one channel of his stereo MP3 file. He could then play the music from his first channel and digitize the FSK data on the other before applying it to a software modem to retrieve its information.
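The FSK idea itself is classic modem territory and easy to prototype. A hedged Python sketch of encoding bits as tone bursts and decoding them by tone correlation (the Bell-202-style tone pair and 300 baud are my choices, not necessarily [drtune]'s):

```python
import numpy as np

FS = 44100                           # sample rate
BAUD = 300                           # bits per second (assumed)
F_MARK, F_SPACE = 1200.0, 2200.0     # tone frequencies for 1 and 0 (assumed)

def fsk_encode(bits):
    """One sine burst per bit: F_MARK for 1, F_SPACE for 0."""
    spb = FS // BAUD                 # samples per bit
    t = np.arange(spb) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F_MARK if b else F_SPACE) * t) for b in bits])

def fsk_decode(signal):
    """Non-coherent detection: compare the energy near each tone per bit period."""
    spb = FS // BAUD
    t = np.arange(spb) / FS
    bits = []
    for k in range(len(signal) // spb):
        seg = signal[k * spb:(k + 1) * spb]
        e_mark = abs(np.sum(seg * np.exp(-2j * np.pi * F_MARK * t)))
        e_space = abs(np.sum(seg * np.exp(-2j * np.pi * F_SPACE * t)))
        bits.append(1 if e_mark > e_space else 0)
    return bits
```

Writing the encoded burst into one channel of a stereo WAV next to the music is then just an array-stacking exercise.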

There was a small snag, though: the MP3 player summed both channels before supplying audio to its amplifier. Not a huge problem to overcome; a bit of detective work in the device datasheet allowed him to identify the resistor network doing the mixing, and he removed the component for the data channel.

He’s posted full details of the system in the video below the break, complete with waveforms and gratuitous playback of audio FSK data.

This isn’t the first time we’ve featured audio FSK data here at Hackaday. We’ve covered its use to retrieve ROMs from 8-bit computers, seen it appearing as part of TV news helicopter coverage, and even seen an NSA Cray supercomputer used to decode it when used as a Star Trek sound effect.


Filed under: digital audio hacks

by Jenny List at June 13, 2016 03:31 PM

Hackaday Prize Entry: 8-Bit Arduino Audio for Squares

A stock Arduino isn’t really known for its hi-fi audio-generating abilities. For “serious” audio like sample playback, people usually add a shield with hardware to do the heavy lifting. Short of that, many projects limit themselves to constant-volume square waves, which are musically uninspiring, but easy.

[Connor]’s volume-control scheme for the Arduino bridges the gap. He starts off with the tone library that makes those boring square waves, and adds dynamic volume control. The difference is easy to hear: in nature almost no sounds start and end instantaneously. Hit a gong and it rings, all the while getting quieter. That’s what [Connor]’s code lets you do with your Arduino and very little extra work on your part.

The code that accompanies the demo video (which is embedded below) is a good place to start playing around. The Gameboy/Mario sound, for instance, is as simple as playing two tones, and making the second one fade out. Nonetheless, it sounds great.

Behind the scenes, it uses Timer 0 at maximum speed to create the “analog” values (via PWM and the analogWrite() command) and Timer 1 to create the audio-rate square waves. That’s it, really, but that’s enough. A lot of beloved classic arcade games didn’t do much more.
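The effect is easy to hear in simulation: multiply a constant-volume square wave (Timer 1's job) by a decaying envelope (what the Timer 0 PWM trick approximates). A sketch in Python rather than Arduino C, purely for illustration:

```python
import numpy as np

def fading_square(freq=440.0, duration=1.0, fs=8000, decay=4.0):
    """An audio-rate square wave scaled by an exponential fade-out,
    mimicking the gong-like ring of a volume-enveloped tone."""
    t = np.arange(int(duration * fs)) / fs
    square = np.sign(np.sin(2 * np.pi * freq * t))  # constant-volume tone
    envelope = np.exp(-decay * t)                   # the "analog" volume
    return square * envelope
```

The Mario-style coin sound mentioned above is then just two of these in a row, with the second allowed to ring out.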

While you can do significantly fancier things (like sample playback) with the same hardware, the volume-envelope-square-wave approach is easy to write code for. And if all you want is some simple, robotic-sounding sound effects for your robot, we really like this approach.


Filed under: Arduino Hacks, digital audio hacks, The Hackaday Prize

by Elliot Williams at June 13, 2016 05:01 AM

June 10, 2016

open-source – CDM Create Digital Music

Music thing’s Turing Machine gets a free Blocks version

We already saw some new reasons this week to check out Reaktor 6 and Blocks, the software modular environment. Here’s just one Blocks module that might get you hooked – and it’s free.

“Music Thinking Machines,” out of Berlin, have built a software rendition of Music Thing’s awesome Turing Machine Eurorack module (created by Tom Whitwell). As that hardware is open source, and because what you can do in wiring you can also do in software, it was possible to build software creations from the Eurorack schematics.

The beauty of this is, you get the Turing Machine module in a form that lets you instantly control other Reaktor creations – as well as the ability to instantiate as many modules as you want without the aid of a screwdriver or waiting for a DHL delivery to arrive. (Hey, software has some advantages.) I don’t so much see it reducing the appeal of the hardware, either, as it makes me covet the hardware version every time I open up the Reaktor ensemble.

And the module is terrific. In addition to the Turing Machine Mk 2, you get the two Mk 2 expanders, Volts and Pulses.

The Turing Machine Mk 2 is a random looping sequencer – an idea generator that uses shift registers to make melodies and rhythms you can use with other modules. It’s also a fun build. But now, you can use that with the convenience of Reaktor.

The Pulses and Volts expanders add still more unpredictability. Pulses is a random looping clock divider, and Volts is a random looping step sequencer. I also like the unique front panels made just for the Reaktor version … I wonder if someone will translate them into actual hardware.

The idea is to connect them together: take the 8 P outputs from the Turing Machine and connect them to the 8 P inputs on Pulses (for pulses), and then do the same with the voltage inputs and outputs on Volts. You can also make use, as the example ensemble does, of a Clock and Clock Divider module included by default in Reaktor 6’s Blocks collection.

With controls for probability and sequence length, you can put it all together and have great fun with rhythms and tunes.
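The core of the Turing Machine is tiny: a looping shift register whose recycled bit gets flipped with a knob-set probability. A sketch of the behaviour in Python (my own simplified reading of the open-source design):

```python
import random

class TuringMachine:
    """Random looping sequencer: a shift register whose recycled bit is
    flipped with probability `prob`. prob=0 locks the loop; prob=1 always
    flips, giving the doubled-length "mobius loop" behaviour."""
    def __init__(self, length=16, prob=0.0, seed=None):
        self.rng = random.Random(seed)
        self.bits = [self.rng.randint(0, 1) for _ in range(length)]
        self.prob = prob

    def step(self):
        bit = self.bits.pop(0)
        if self.rng.random() < self.prob:
            bit ^= 1                      # knob away from "locked": mutate
        self.bits.append(bit)
        # Output: read the whole register as an integer, scaled to 0..1,
        # standing in for the module's DAC voltage output
        return int(''.join(map(str, self.bits)), 2) / (2 ** len(self.bits) - 1)
```

Sweeping `prob` from 0 toward 0.5 takes the sequence from a locked loop to ever-mutating, which is exactly the knob's musical range.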

Download the Reaktor ensemble:

Turing Machine Mk2 plus Pulses and Volts Expanders [Reaktor User Library]

Here’s what the original modules look like in action:

Find out more:

https://github.com/TomWhitwell/TuringMachine/

Also worth a read (especially now with this latest example of what open source hardware can mean – call it free advertising in software form, not to mention a cool project):
Why open source hardware works for Music Thing Modular

Oh, and if you want to go the opposite direction, Tom also recently wrote a tutorial on writing firmware for the Mutable Clouds module. The old software/hardware line is more blurred than ever, as we make software versions of hardware that then interface with hardware and back to hardware again, and hardware also runs software. (Whew.)

Turing Machine Controls
Prob: Determines the probability of a bit being flipped from 0 to 1 (or vice versa). Fully right locks the sequence of bits; fully left locks the sequence in a “mobius loop” mode.
Length: Sets the length of the sequence
Scale: Scales the range of the pitch output
+/-: Writes a 1 or a 0 bit into the shift register
AB: Modulation inputs

Pulses Expander Controls
Output: Selects 1 of the 11 gated outputs

Volts Expander Controls
1 to 5: Controls the voltage of the active bit

For more detailed information on how the Turing Machine works, please visit the Music Thing website: https://github.com/TomWhitwell/TuringMachine/

Music Thinking Machines
Berlin

The post Music thing’s Turing Machine gets a free Blocks version appeared first on CDM Create Digital Music.

by Peter Kirn at June 10, 2016 04:37 PM

Libre Music Production - Articles, Tutorials and News

John Option release debut album, "The cult of John Option"

John Option release debut album,

John Option have just released "The cult of John Option". This is their debut album and it brings together all their singles published in the past few months, including remix versions.

As always, John Option's music is published under the terms of the Creative Commons License (CC-BY-SA) and is produced entirely using free software.

by Conor at June 10, 2016 01:30 PM

June 09, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.1. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

June 09, 2016 10:00 AM

June 06, 2016

open-source – CDM Create Digital Music

Ableton hacks: Push 2 video output and more

For years, the criticism of laptops has been about their displays – blue light on your face and that sense that a performer is checking email. But what if the problem isn’t the display, but the location of the display? Because being able to output video to your hardware, while you turn knobs and hit pads, could prove pretty darned useful.

Push 2 video output

And so that makes this latest hack really cool. 60 fps(!) video can now stream over a USB cable to Ableton’s Push 2 hardware. You’ll need some way of creating that video texture, but that’s there in Max for Live’s Jitter objects.

David Butler’s imp.push object, out last week, makes short work of this.

The ingredients that made this possible:
1. Ableton’s API documentation for Push 2, available now on GitHub thanks to Ableton and a lot of hard work by Ralf Suckow.

2. libusb

Learn more at this blog post:
imp.push Beta Released

Get the latest version (or collaborate) at GitHub

Next up on his to-do list – what to do with those RGB pads.

Here’s an impressive video from Cycling ’74 — ask.audio scooped us on this story last week, hat tip to them.

Thanks to Bjorn Vayner for the tip!


Push 2 mappings

And while you’re finding cool stuff to do to expand your Push 2 capabilities, don’t miss this free set of scripts.

Ubermap is a free and open source script for Push 2 designed to let you map VST and AU plug-ins to your Push controller. What’s great about this is that there’s no middle man – nothing like Komplete Kontrol running between you and your plug-in, just direct mapping of parameters. It’s not as powerful or extensive as the Isotonik tool we covered last week, and it’s limited to Push 2 (with some Push 1 support), so you’ll still want to go that route if you fancy using other controller hardware. But the two can be viewed as complementary, particularly as all of this is possible because of Ableton’s API documentation.

You can find the scripts on the Ableton forum:

Ubermap for Push 2 (VST/AU parameter remapping)

There are links there to more documentation and tips on configuration of various plug-ins. Or to grab everything directly, head to GitHub:

http://bit.ly/ubermap-src

Now, let’s hope this paves the way for more native support in future releases of Live, and some sort of interface for doing this in the software without custom scripts. But there’s no reason to wait – these solutions do work now.

Previously:

Ableton just released every last detail of how Push 2 works

You can now access the Push 2 display from Max

Ableton hacks: map anything, even Kontakt and Reaktor

The post Ableton hacks: Push 2 video output and more appeared first on CDM Create Digital Music.

by Peter Kirn at June 06, 2016 03:28 PM

June 03, 2016

blog4

Embedded Artist Berlin concert 3.6.2016

After the great concert last week in Linz during the Amro festival at Stadtwerkstatt, we play as Embedded Artist tonight in Berlin at Ausland:
http://ausland-berlin.de/embedded-artist-antez-morimoto

by herrsteiner (noreply@blogger.com) at June 03, 2016 12:50 AM

June 01, 2016

Libre Music Production - Articles, Tutorials and News

LMP Asks #17: An interview with Frank Piesik

LMP Asks #17: An interview with Frank Piesik

This month we talked with Frank Piesik, a musician, inventor and educator living in Bremen.

Hi Frank, thanks for talking with us! First, can you tell us a little about yourself?

by Scott Petersen at June 01, 2016 02:05 PM

Contest: Win an amazing MOD Duo!

Contest: Win an amazing MOD Duo!

To commemorate the last batch shipment to Kickstarter backers, MOD Devices have set up a social media contest to give away a MOD Duo, the hardware stompbox which runs on Linux and a whole ecosystem of FLOSS audio plugins.

by Conor at June 01, 2016 10:46 AM

May 31, 2016

Linux – CDM Create Digital Music

iZotope Mobius and the crazy fun of Shepard Tones

I always figure the measure of a good plug-in is, you want to tell everyone about it, but you don’t want to tell everyone about it, because then they’ll know about it. iZotope’s Möbius is in that category for me – it’s essentially a moving filter effect. And it’s delicious, delicious candy.

iZotope have been on a bit of a tear lately. The company might be best known for mastering and restoration tools, but in 2016, they’ve had a series of stuff you might build new production ideas around. And I keep going to their folder in my sets. There’s the dynamic delay they built – an effect so good that you’ll overlook the fact that the UI is inexplicably washed out. (I just described it to a friend as looking like your license expired and the plug-in was disabled or something. And yet… I think there’s an instance of it on half the stuff I’ve made since I downloaded it.)

More recently, there was also a plug-in chock full of classic vocal effects.

iZotope Möbius brings an effect largely used in experimental sound design into prime time.

At its core is a perceptual trick called the “Shepard tone” (named for cognitive scientist Roger Shepard). Like the visual illusion of stripes on a rotating barber pole, the sonic illusion of the Shepard tone (or the continuously gliding Shepard–Risset glissando) is such that you perceive endlessly rising motion.

Here, what you should do for your coworkers / family members / whatever is definitely to turn this on and let them listen to it for ten hours. They’ll thank you later, I’m sure.

The Shepard Tone describes synthesis – just producing the sound. The Möbius Filter applies the technique to a resonant filter, so you can process any existing signal.
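The synthesis side of the illusion is simple enough to sketch: several partials an octave apart all glide upward together, while an amplitude window over log-frequency fades each one in at the bottom and out at the top, so you never hear a partial enter or leave. A numpy sketch (all parameter choices are my own):

```python
import numpy as np

def shepard_tone(duration=10.0, fs=22050, n_octaves=6, base=40.0, rate=0.1):
    """Endlessly rising Shepard-Risset glissando.

    Each partial's position wraps around the octave span; a raised-cosine
    window over that position silences partials at the extremes, masking
    the moment they jump back down to the bottom.
    """
    t = np.arange(int(duration * fs)) / fs
    out = np.zeros_like(t)
    for k in range(n_octaves):
        pos = (k + rate * t) % n_octaves          # octaves above `base`
        freq = base * 2 ** pos
        amp = 0.5 - 0.5 * np.cos(2 * np.pi * pos / n_octaves)  # loudness bell
        phase = 2 * np.pi * np.cumsum(freq) / fs  # integrate instantaneous freq
        out += amp * np.sin(phase)
    return out / np.max(np.abs(out))
```

Loop the result and it rises forever; negate `rate` and it falls forever, which is the ten-hours-for-your-coworkers version.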

Musical marketing logic is such that of course you’re then obligated to tell people they’ll want to use this effect for everything, all the time. EDM! Guitars! Vocals! Whether you play the flugelhorn or are the director of a Bulgarian throat singing ensemble, Möbius Filter adds the motion and excitement every track needs!

And, uh, sorry iZotope, but as a result I find the sound samples on the main page kind of unlistenable. Of course, taste is unpredictable, so have a listen. (I guess actually this isn’t a bad example of a riser for EDM so much as me hating those kinds of risers. But then, I like that ten hours of glissandi above, so you probably shouldn’t listen to me.)

https://www.izotope.com/en/products/create-and-design/mobius-filter/sounds.html

Anyway, I love the sound on percussion. Here’s me messing around with that, demonstrating the ability to change direction, resonance, and speed, with stereo spatialization turned on:

The ability to add sync effects (and hocketing, with triplet or dotted rhythms) for me is especially endearing. And while you’ll tire quickly of extreme effects, you can certainly make Möbius Filter rather subtle, by adjusting the filter and mix level.

Möbius Filter is US$49 for most every Mac and Windows plug-in format. A trial version is available.


https://www.izotope.com/en/products/create-and-design/mobius-filter.html

It’s worth learning more about the Shepard and Risset techniques in general, though – get ready for a very nice rabbit hole to climb down. Surprisingly, the Wikipedia article is a terrific resource:

Shepard tone

If you want to try coding your own shepard tone synthesis, you can do so in the free and open source, multi-platform environment SuperCollider. In fact, SuperCollider is what powered the dizzying musical performance by Marcus Schmickler CDM co-hosted with CTM Festival last month here in Berlin. Here’s a video tutorial that will guide you through the process (though there are lots of ways to accomplish this).

The technique doesn’t stop in synthesis, though. Just as the same basic perceptual trick can be applied to rising visuals and rising sounds, it can also be used in rhythm and tempo – which sounds every bit as crazy as you imagine. Here’s a description of that, with yet more SuperCollider code and a sound example using breaks. Wow.

Risset rhythm – eternal accelerando

Finally, the 1969 rendition of this technique by composer James Tenney is absolutely stunning. I don’t know how Ann felt about this, but it’s titled “For Ann.” (“JAMES! MY EARS!” Okay, maybe not; maybe Ann was into this stuff. It was 1969, after all.) Thanks to Jos Smolders for the tip.

Good times.

So, between Möbius Filter and SuperCollider, you can pretty much annoy anyone. I’m game.

https://supercollider.github.io

The post iZotope Mobius and the crazy fun of Shepard Tones appeared first on CDM Create Digital Music.

by Peter Kirn at May 31, 2016 07:11 PM

Scores of Beauty

Music Encoding Conference 2016 (Part 1)

About a year ago I posted a report of my first appearance at the Music Encoding Conference that had taken place in Florence (Italy). I then introduced the idea of interfacing LilyPond with MEI, the de facto standard in (academic) digital music edition, and was very grateful to be welcomed warmly by that scholarly community. Over the past year this idea became increasingly concrete, and so I’m glad that German research funds made it possible to present another paper at this year’s conference, although Montréal (Canada) isn’t exactly around the corner. In a set of two posts I will talk about my impressions in general (current post) and my paper and other LilyPond-related aspects (next post).

The MEI (which stands for Music Encoding Initiative, and is both a community and a format specification) is a quite small and friendly community, although it basically represents the Digital Humanities branch of musicology as a whole. As a consequence it’s nice to see many people again at this yearly convention. There were 67 registered participants from 10 countries, with a rather strong focus on North America and central Europe (last year in Florence I think we were around 80).

The MEC is a four-day event, with days two and three dedicated to actual paper presentations. The first day features workshops, while the fourth day is an “unconference day” giving the opportunity for spontaneous or pre-arranged discussion and collaboration. A sub-event that seems to gain relevance each year is the conference banquet – one could even imagine that by now this plays a role when applying to organize the next MECs 😉 . We had a nice dinner at the Auberge Saint Gabriel with excellent food and wine and an extremely high noise floor that I attribute to the good mood and spirit we all had. And on the last evening we had the chance to attend a lecture recital with Karen Desmond and the VivaVoce ensemble, who gave us a commented overview of the history of notation from around 900 to the late 16th century.

Ensemble VivaVoce and Karen Desmond (click to enlarge)

Verovio Workshop

From the workshops I decided to attend Verovio – current status and future directions, which was partly a presentation of the tool itself and its latest development, but also a short hands-on introductory tutorial (OK, “hands-on” was limited to having the files available to look through and modify the configuration variables). Verovio is currently “the” tool of choice for displaying scores in digital music editions, so it’s obvious that I’m highly interested in learning more about it. Basically it is a library that renders MEI data to scores in SVG files, with a special feature being that the DOM structure of the SVG file matches that of the original MEI, which makes it easy to establish two-way links between source and rendering. Verovio is written in C++ and compiled to a number of target environments/languages. The most prominent one is JavaScript through which Verovio provides real-time engraving in the browser. You should consider having a look at the MEI Viewer demonstration page.

Screenshot from the Verovio website, showing the relation of rendering and source structure (click to enlarge)

Verovio’s primary focus is on speed and flexibility, and what can I say? It’s amazing! Once the library and the document have been downloaded, the score is rendered and modified near-instantly, with a user experience matching ordinary web browsing. It is possible to resize and navigate a score in real time, with instant reflow. Score items can easily be accessed through JavaScript and may be used to write back any actions to the original source file. And as we’re in the XML domain throughout, you can do cool things like rendering remotely hosted scores or extracting parts through XSL transformations and queries. A rather new feature is MIDI playback with highlighting of the played notes. The MIDI player is linked quite tightly into the document, so you can use the scrollbar or click on notes to jump playback, with everything staying robustly in sync.

Of course this performance comes at a cost: as Verovio is tuned for speed and flexibility, its engraving engine is rather simplistic. And apart from the fact that it doesn’t yet support everything a notation program would need, it will probably never compete with LilyPond in terms of engraving quality. On the other hand, LilyPond will probably never compete with Verovio on its native qualities, speed and flexibility. This boils down to Verovio and LilyPond being perfect complements rather than competitors. They should be able to happily coexist side by side – within the same editing project or even editing environment. But I’ll get back to that in the other post.

Paper Presentations

Days two and three were filled with paper presentations and posters, and I can hardly give a comprehensive account of everything. Instead I have to pick a few things and make some remarks from a somewhat LilyPond-ish perspective.

Our nice conference hall (presentation by Reiner Krämer). “Cope events” are somewhat like MIDI wrapped in LISP (click to view full image)

Metadata and Linked Data

Generally speaking the MEI has two independent objectives: music editing and metadata. The original inventor of MEI, Perry Roland, is actually a librarian, and so documenting everything about sources is an inherent goal in the MEI world. Typical projects in that domain might be the cataloguing of a historic library such as the Sources of the Detmold Court Theatre Collection (German only).

But encoding the physical sources alone only gets you so far without considering the power of linking data. There are numerous items in such a house that may refer to each other and provide additional information: bills, copyists’ marks, evening programmes, comments and modifications in individual copies of the music, and much more. Making this kind of information retrievable, possibly across projects, promises new areas of research.

Encoding enhanced data specifying concrete performances of a work is another related area of research. Existing approaches start from secondary information like inscriptions in the performance material and go all the way to designing systems that encode timing, articulation and dynamics from recorded music, as was presented by Axel Berndt. While still far from analyzing their data directly from the recording, it seems a very promising project to provide a solid data foundation for investigating parameters of “musical” performance, for example determining a “rubato fingerprint” for a given pianist. Of course this also works in the other direction, and we heard a MIDI rendering of a string quartet of astonishing liveliness. I’d be particularly interested to see whether that technology could be built upon for notation editors’ playback engines.

Extending the Scope of MEI

A ubiquitous topic on the actual music-encoding side is how to deal with specific repertoire that isn’t covered by Common Western Music Notation. As MEI is so flexible and open, it is always possible to create project-specific customizations to include the notation repertoire at hand. But that freedom also carries the risk of the format splitting so widely that it becomes meaningless. This is why it is so important to regularly discuss these things in the wider MEI community.

The top targets in this area seem to be neumes and lute (and other) tablature systems, while I didn’t see any attempts towards encoding contemporary or non-western notation styles so far.

Edition Projects

Of course there also were presentations of actual edition projects, of which I’ll mention just a few.

Neuma is a digital library of music scores encoded in MEI (and partially still MusicXML). It features searching by phrases, and the scores can be referenced to be rendered anywhere with Verovio (as described above). They have also been working with LilyPond and would be happy to have this as an additional option for presenting higher-quality renderings of their scores and incipits.

Johannes Kepper gave an insightful and also amusing presentation about the walls they ran into with their digital Freischütz edition. This project actually pushed the limits of digital music edition pretty hard and can be used as a reference for approaches and limitations alike. Just imagine that their raw data is about 230 MB worth of XML files – out of which approximately 100 MB account for the encoding of the autograph manuscript alone …

A poster was dedicated to the “genetic edition” of Beethoven’s sketches. This project sets out to encode the genetic process that can be retraced in the manuscript sources giving access to each step of Beethoven’s working process individually.

Salsah is a project at the Digital Humanities Lab at the University of Basel. They work on an online presentation of parts of the Anton Webern Gesamtausgabe, namely the sketches (while the “regular” works are intended to be published as a traditional print-only edition). The project is still in the prototype stage, but it has to be said that it is fighting somewhat desperately with its data. The Webern edition is realized using Finale – and the exported MusicXML isn’t exactly easy to make semantic sense of … Well, they would have had the solution at their fingertips, but two and a half years ago I wasn’t able to convince them to switch to LilyPond before publishing the first printed volumes 😉


After these more general observations a second post will go into more detail about LilyPond specific topics, namely MEI’s lack of a professional engraving solution, my own presentation, and nCoda, a new editing system that was presented for the first time at the MEC (incidentally just two days after the flashy and heavily pushed Dorico announcement). I have been in touch with the nCoda developers for over a year now, and it was very nice and fruitful to have a week together in person – but that’s for the next post …

by Urs Liska at May 31, 2016 06:44 AM

May 28, 2016

A touch of music

Modeling rhythms using numbers - part 2

This is a continuation of my previous post on modeling rhythms using numbers.

Euclidean rhythms

The Euclidean rhythm in music was discovered by Godfried Toussaint in 2004 and is described in his 2005 paper "The Euclidean Algorithm Generates Traditional Musical Rhythms". Running Euclid's greatest-common-divisor algorithm on the numbers of beats and silences distributes the beats as evenly as possible and generates a large majority of the important World Music rhythms.

Do it yourself

You can play with a slightly generalized version of Euclidean rhythms in your browser using a p5js-based sketch I made to test my understanding of the algorithms involved. If it doesn't work in your preferred browser, retry with Google Chrome.

The code

The code may still evolve in the future. There are some possibilities not explored yet (e.g. using ternary number systems instead of binary to drive 3 sounds per circle). You can download the full code for the p5js sketch on GitHub.

Screenshot of the p5js sketch running. Click the image to enlarge.

The theory

So what does it do and how does it work? Each wheel contains a number of smaller circles. Each small circle represents a beat. With the length slider you decide how many beats are present on a wheel.  

Some beats are colored dark gray (these can be seen as strong beats), whereas other beats are colored white (weak beats). One can assign a different instrument to the strong and the weak beats. The target pattern length decides how many weak beats exist between the strong beats. Of course it's not always possible to honor this request: in a cycle with a length of 5 beats and a target pattern length of 3 beats (left wheel in the screenshot) we will have a phrase of 3 beats that conforms to the target pattern length, and a phrase consisting of the 2 remaining beats that makes a "best effort" to comply with the target pattern length.

Technically this is accomplished by running Euclid's algorithm. This algorithm is normally used to calculate the greatest common divisor of two numbers, but here we are mostly interested in its intermediate results. In Euclid's algorithm, to calculate the greatest common divisor of an integer m and a smaller integer n, the smaller number n is repeatedly subtracted from the greater until the greater is zero or becomes smaller than the smaller, in which case it is called the remainder. This remainder is then repeatedly subtracted from the smaller number to obtain a new remainder, and the process continues until the remainder is zero. When that happens, the corresponding smaller number is the greatest common divisor of the original two numbers n and m.
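The subtraction (or division) form of Euclid's algorithm just described can be sketched in a few lines of Python. This is my own illustration, not code from the p5js sketch; it records each intermediate identity m = q·n + r, since those intermediate results are what the rhythm construction uses:

```python
def euclid_steps(m, n):
    """Run Euclid's algorithm on m and a smaller n, returning the GCD
    and every intermediate identity m = q*n + r along the way."""
    steps = []
    while n:
        q, r = divmod(m, n)       # how often n fits into m, and what remains
        steps.append((m, q, n, r))
        m, n = n, r               # continue with the smaller number and the remainder
    return m, steps

gcd, steps = euclid_steps(5, 3)
# steps are (m, q, n, r) tuples: 5 = 1*3 + 2, then 3 = 1*2 + 1, then 2 = 2*1 + 0
```

For the worked example that follows (m = 5, n = 3), the three recorded steps correspond exactly to the three decompositions shown below, and the returned GCD is 1.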

Let's try it out on the situation of the left wheel in the screenshot. The greater number m is 5 (length) and the smaller number n is 3 (target pattern length). Now the recipe says to repeatedly subtract 3 from 5 until you get something smaller than 3. We can do this exactly once:

5 - (1).3 = 2

We can rewrite this as:

5 = (1).3 + 2

This we can interpret as: the cycle of 5 beats is to be decomposed into 1 phrase with 3 beats, followed by a phrase with 2 beats (the remainder). Each phrase consists of a single strong beat followed by all weak beats. In a symbolic representation more easily read by musicians one might write: x..x. (In the notation of the previous part of this article one could also write 10010).

Euclid's algorithm doesn't stop here. Now we have to repeatedly subtract the remainder 2 from the smaller number 3:

3 = (1).2 + 1

This in turn can be read as: the phrase of 3 beats can be further decomposed into 1 phrase of 2 beats followed by a phrase consisting of 1 beat (symbolically: x.x). Euclid continues:

2 = (2).1 + 0

The phrase of two beats can be represented symbolically as: xx. We've reached remainder 0 and Euclid stops: apparently the greatest common divisor between 5 and 3 is 1.

Now it's time to realize what we really did: 
  • We decomposed a phrase of 5 beats in a phrase of 3 beats and a phrase of 2 beats making a rhythm x..x. 
  • Then we further decomposed the phrase of 3 beats into a phrase of 2 beats followed by a phrase of 1 beat. 
  • We can substitute this refined 3 beat phrase in our original rhythm of 5 = 3+2 beats to get a rhythm consisting of 5 = (2 + 1) + 2 beats: x.xx. 
  • I hope it's clear by now that by choosing how long to continue using Euclid's algorithm, we can decide how fine-grained we want our rhythms to become. 
  • This is where the max pattern length slider comes into play. 
The length slider and the target pattern slider will determine a rough division between strong and weak beats by running Euclid's algorithm just once, whereas the max pattern length slider helps you decide how long to carry on Euclid's algorithm to further refine the generated rhythm.
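My reading of this refinement process can be condensed into a short recursion. This is a hedged Python sketch of the idea, not the sketch's actual p5js code, and the function and parameter names are my own: decompose a cycle into phrases of the target length plus a remainder, and keep refining phrases that are longer than the maximum pattern length:

```python
def phrase(n):
    """One strong beat followed by n-1 weak beats, e.g. phrase(3) == 'x..'."""
    return "x" + "." * (n - 1)

def euclidean_rhythm(length, target, max_len):
    """Decompose a cycle of `length` beats into phrases of `target` beats
    plus a remainder phrase, refining recursively (one Euclid step per
    level) as long as phrases exceed `max_len`."""
    if target == 0 or length <= max_len:
        return phrase(length)
    q, r = divmod(length, target)            # length = q*target + r
    head = euclidean_rhythm(target, r, max_len) * q
    tail = phrase(r) if r else ""            # the remainder phrase stays coarse
    return head + tail

# The example from the text: 5 beats with a target pattern length of 3.
# Stopping after one Euclid step yields x..x.; refining further yields x.xx.
```

With `max_len=3` the recursion stops after the first step and reproduces x..x.; with `max_len=2` the 3-beat phrase is refined once more, giving x.xx., just as in the walkthrough above.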


by Stefaan Himpe (noreply@blogger.com) at May 28, 2016 02:22 PM

May 24, 2016

digital audio hacks – Hackaday

Secret Listening to Elevator Music

While we don’t think this qualifies as a “fail”, it’s certainly not a triumph. But that’s what happens when you notice something funny and start to investigate: if you’re lucky, it ends with “Eureka!”, but most of the time it’s just “oh”. Still, it’s good to record the “ohs”.

Gökberk [gkbrk] Yaltıraklı was staying in a hotel long enough that he got bored and started snooping around the network, like you do. Breaking out Wireshark, he noticed a lot of UDP traffic on a nonstandard port, so he thought he’d have a look.

A couple of quick Python scripts later, he had downloaded a number of the sample packets and decoded them into hex and found the signature for LAME, an MP3 encoder. He played around with byte offsets until he got a valid MP3 file out, and voilà, the fantastic reveal! It was the hotel’s elevator music stream — that he could hear outside in the corridor with much less effort. (Sad trombone.)
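Finding the right byte offset in such a capture essentially means scanning for an MPEG audio frame sync: a 0xFF byte followed by a byte whose top three bits are set. The following is a hedged sketch of that search in Python; it is my own illustration of the technique, not [gkbrk]'s actual script, and the sample bytes are hypothetical:

```python
def find_mp3_sync(data: bytes) -> int:
    """Return the offset of the first MPEG audio frame sync word
    (0xFF followed by a byte matching the 0xE0 mask), or -1 if absent."""
    for i in range(len(data) - 1):
        if data[i] == 0xFF and (data[i + 1] & 0xE0) == 0xE0:
            return i
    return -1

packet = b"\x12\x34\xff\xfb\x90\x44"   # hypothetical UDP payload bytes
offset = find_mp3_sync(packet)          # strip everything before this offset
```

Concatenating the payloads from each offset onward is then enough to produce a stream most MP3 decoders will accept, since they resynchronize on frame headers anyway.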

But just because nothing came up this time doesn’t mean that nothing will come up next time. And it’s important to keep your skills sharp for when you really need them. We love following along with peoples’ reverse engineering efforts, whether or not they end up finding anything. What oddball signals have you found lately?

Thanks [leonardo] for the tip! Wireshark graphic from Softpedia’s entry on Wireshark. Simulated-phosphor audio display by Oona [windytan] Räisänen (check that out!).


Filed under: digital audio hacks, security hacks, slider

by Elliot Williams at May 24, 2016 08:01 AM

May 22, 2016

aubio

Install aubio with pip

You can now install aubio's python module using pip:

$ pip install git+git://git.aubio.org/git/aubio

This should work for Python 2.x and Python 3.x, on Linux, Mac, and Windows. Pypy support is on its way.

May 22, 2016 01:00 PM

May 17, 2016

OSM podcast

May 14, 2016

Libre Music Production - Articles, Tutorials and News

EMAP - a GUI for Fluidsynth

EMAP - a GUI for Fluidsynth

EMAP (Easy Midi Audio Production) is a graphical user interface for the Fluidsynth soundfont synthesizer. It functions as a Jack compatible:

by admin at May 14, 2016 04:12 PM

May 11, 2016

Pid Eins

CfP is now open

The systemd.conf 2016 Call for Participation is Now Open!

We’d like to invite presentation and workshop proposals for systemd.conf 2016!

The conference will consist of three parts:

  • One day of workshops, consisting of in-depth (2-3hr) training and learning-by-doing sessions (Sept. 28th)
  • Two days of regular talks (Sept. 29th-30th)
  • One day of hackfest (Oct. 1st)

We are now accepting submissions for the first three days: proposals for workshops, training sessions and regular talks. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Use Cases: systemd in today’s and tomorrow’s devices and applications
  • systemd and containers, in the cloud and on servers
  • systemd in distributions
  • systemd in embedded devices and IoT
  • systemd on the desktop
  • Networking with systemd
  • … and everything else related to systemd

Please submit your proposals by August 1st, 2016. Notification of acceptance will be sent out 1-2 weeks later.

If submitting a workshop proposal please contact the organizers for more details.

To submit a talk, please visit our CfP submission page.

For further information on systemd.conf 2016, please visit our conference web site.

by Lennart Poettering at May 11, 2016 10:00 PM