planet.linuxaudio.org

May 24, 2015

Libre Music Production - Articles, Tutorials and News

May 2015 - Interview, Tutorial and more

Our newsletter for May has now been sent to our subscribers. If you have not yet subscribed, you can do so from our start page.

You can also read the latest issue online. In it you will find:

  • 'LMP Asks' interview with Florian Bador
  • How to use outboard gear in Ardour tutorial
  • New software release announcements

and more!

by admin at May 24, 2015 08:20 PM

May 22, 2015

Create Digital Music » Linux

Cool Things Chrome Can Do Now, Thanks to Hardware MIDI

Plugging a keyboard or drum pads into your Web browser is now a thing.

One month ago, we first saw hardware MIDI support in Chrome. That was a beta; this week, Google pushed it out to all Chrome users.

So, what can you actually do with this stuff? Well, you can open a Web tab and play a synth on actual hardware, which is pretty nifty.

Support is still a little dicey, but the available examples are growing fast. Here are some of the coolest, in addition to the MIDI example and demo code we saw last month.
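Before getting to the list, a quick aside for the developers in the room: the browser-facing API itself is tiny. Here's a minimal sketch (TypeScript/JavaScript, not taken from any of the demos below) of requesting MIDI access and logging whatever an attached controller sends:

```typescript
// Minimal Web MIDI sketch: list attached inputs and log incoming messages.
// Requires a browser that implements the Web MIDI API (Chrome, at the time of writing).
navigator.requestMIDIAccess({ sysex: false }).then(
  (access) => {
    for (const input of access.inputs.values()) {
      console.log(`MIDI input: ${input.name} (${input.manufacturer})`);
      input.onmidimessage = (event) => {
        // Each message arrives as a Uint8Array: a status byte plus data bytes,
        // e.g. a note-on looks like [0x90, note, velocity].
        console.log("MIDI message:", event.data);
      };
    }
  },
  (err) => console.warn("Web MIDI unavailable or permission refused:", err)
);
```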

The examples are certainly promising, but you may want to temper expectations. Users of browser-based tools built on Flash will find some of this old news. Audiotool, for one, has already had a really sophisticated (semi-modular, even) production tool running for some years. (It's relevant here that Audiotool is moving to HTML5 and MIDI support, but that version isn't here yet.) And while open standards are supposed to mean more compatibility, in practice they currently deliver far less. Even though Safari and Chrome are pretty close to one another in rendering pages, I couldn't get any of these examples working properly in any browser other than Chrome. And while I could get pretty low-latency functionality, none of this is anywhere near as solid in terms of sound performance as standalone music software.

So, that leaves two challenges. One, the implementation is going to have to improve if non-developers are going to start to use this. And two, if this stuff is going to see the light of day beyond music hackathons, it’ll need some applications. That said, I could imagine educational applications, demos of apps, collaborative possibilities, and more – and those expand if the tech improves. And, of course, this also gets really interesting on inexpensive Chromebooks – which it seems are selling in some numbers these days.

But that’s the future. Here are some of the things you can do right now:

Audiotool is coming to HTML5, and Heisenberg is here now. Heisenberg is, I think, the coolest option yet – more than just a tech demo, you can plug in a MIDI keyboard and it's a really fun, free browser synth. Given the amount of pleasure we've gotten out of the odd Web time-waster, this is serious business.

But that’s just the appetizer. The team behind Audiotool are working on porting it to HTML5. That should be an excellent test of just how mature this technology is. Audiotool is great and – Flash or not – it’s worth having a play with if you are the kind of person who gets some inspiration from new software toys. (And if you’re reading this far, I suspect you are.)

http://www.audiotool.com/product/device/heisenberg/

http://www.audiotool.com/app [Flash for now]

Revisit Roland. Steven Goldberg’s 106.js reimagines the classic Roland Juno-106 in JavaScript. And it’s just added MIDI support. Plus you can check the code out, free.

http://resistorsings.com/106/

GitHub

Play a 60s Yamaha combo organ. The oddest of this bunch is also my favorite sonically, just because it’s so quirky. The Foo YC20 is an emulation of Yamaha’s 1969 organ, the YC-20 combo – “features and flaws” all included. And now it feels more like an organ, since you can connect a MIDI keyboard.

Users should like it: if you're not fond of running it in your browser, you can also run it standalone, as a VST plug-in on Mac or Windows, or as an LV2 plug-in on Linux.

Developers will like it, too: besides being a surprisingly authentic open source recreation, it's all coded in the Faust programming language, a functional language for DSP.

http://foo-yc20.codeforcode.com

Run a full modular DAW. No need to wait on Audiotool: app.hya.io is already a full-featured semi-modular DAW built in HTML5 with MIDI support (and audio input). It’s got a full assortment of instruments and effects, too – and some interesting ones, so it complements Audiotool.

http://app.hya.io/

Run a bunch of microtonal synths. Mitch Wells' Web Synths is a deep microtonal instrument, capable of some unique sound designs, and perhaps the richest actual synth of this bunch. Patch sharing shows one powerful feature of putting instruments on the Web – the ability to share with others.

http://www.websynths.com/

Live-code your own synth. Maybe this is the application that makes the most sense. While it's tough for the other proof-of-concept toys to compete with your desktop instruments, it's hard to beat the ability to live-code with Web tech in a browser.

And by “code,” you hardly have to be a hard-core coder. The coding is radically simplified here, spitting out JavaScript from basic commands – fun for even the most entry-level hacker to play around with.

Vult by Leonardo Laguna Ruiz was built at MIDIHACK, the hackathon I was part of here in Berlin this month.

http://modlfo.github.io/vult/demo.html

Play a synth – with colored lights and more. Synthy.io is a three-oscillator synth with some interesting extras. There’s a tracker-sequencer built in, and you can play a “live” mode with color output.

The nerdy stuff behind the scenes demonstrates some potential for other projects. Apart from the new MIDI mode, the server mode offers up other possibilities. (socket.io, Node.js, live server, NeDB database holding patterns, if you’re curious.)

What does that mean in practice? Developer Filip Hnízdo writes in comments:

“One of the features I’m most proud of is the live websocket server so any pattern that gets pushed to it is played live to a page where anyone can hear what anyone else has created in realtime. Especially fun with MIDI routed into soft synths or hardware. If enough people pushed patterns in you could just leave it on in your bedroom and constantly hear new music as it arrives. The patterns are all encoded as URLS too so easy to share.”
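Filip's description maps onto a surprisingly small relay. As a hypothetical sketch (event names and setup are invented for illustration – this is not synthy.io's actual source), a Node.js/socket.io server that rebroadcasts every pushed pattern to all connected listeners could be as simple as:

```typescript
import { createServer } from "http";
import { Server } from "socket.io";

// Hypothetical "live pattern" relay in the spirit of what Filip describes:
// any pattern one client pushes is immediately rebroadcast to every listener.
const httpServer = createServer();
const io = new Server(httpServer);

io.on("connection", (socket) => {
  socket.on("pattern", (pattern: unknown) => {
    // Fan the new pattern out to all connected pages (and any MIDI/synth bridges).
    io.emit("pattern", pattern);
  });
});

httpServer.listen(3000, () => console.log("Pattern relay listening on :3000"));
```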

Having just read a history of the first networked first-person shooter in the '70s, it's worth saying: this stuff can lead to unexpected places. And Filip is looking for collaborators.

http://synthy.io/

Got more for us? Let us know in comments.

And if you have any tips on audio performance or how this is developing (since I complained about that), or likely applications (since I mused about that), we’d love to hear that, too.

The post Cool Things Chrome Can Do Now, Thanks to Hardware MIDI appeared first on Create Digital Music.

by Peter Kirn at May 22, 2015 05:24 PM

May 16, 2015

linux.autostatic.com » linux.autostatic.com

Raspberry Pi Revisited

When the Raspberry Pi 2 was released I certainly got curious. Would it really be better than its little brother? As soon as it became available in the Netherlands I bought one, and sure enough, this thing flies compared to the Raspberry Pi 1. The four cores and 1GB of memory are certainly an improvement. The biggest improvement, though, is the shift from ARMv6 to ARMv7. Now you can run basically anything on it, so I soon parted from Raspbian and I'm now running plain Debian Jessie armhf on the RPi.

So is everything fine and dandy with the RPi2? Well, no. It still has the same poor USB implementation and audio output. And it was quite a challenge to prepare it for its intended use: a musical instrument. To my great surprise, a new version of the Wolfson Audio Card was also available for the new Raspberry Pi board layout, so as soon as people reported they had got it working with the RPi2, I ordered one too.


Cirrus Logic Audio Card for Raspberry Pi

One of the first steps to make the device suitable for use as a musical instrument was to build a real-time kernel for it. Building the kernel itself was quite easy, as the RT patch set for the kernel currently used by the Raspberry Pi Foundation (3.18) applied cleanly, and it also booted without issues. But after a few minutes the RPi2 would lock up without logging anything. Fortunately there were people in the same boat as me, and with the help of the info and patches provided by the Emlid community I managed to get my RPi2 stable with an RT kernel.

The next step was to get the right software running, so I dusted off my RPi repositories and added a Jessie armhf repo. With the help of fundamental, the latest version of ZynAddSubFX now runs like a charm with very acceptable latencies; with instrument patches that aren't all too elaborate, Zyn is happy with an internal latency of 64/48000 = 1.3 ms. I haven't measured the total round-trip latency, but it probably stays well below 10 ms. LinuxSampler with the Salamander Grand Piano sample pack also performs a lot better than on the RPi1, and when using ALSA directly I barely get any underruns with a slightly higher buffer setting.

I'd love to get Guitarix running on the RPi2 with the Cirrus Logic Audio Card so that will be the next challenge.

by Jeremy at May 16, 2015 03:57 PM

May 14, 2015

Libre Music Production - Articles, Tutorials and News

Open Music Contest announces crowdfunding campaign

M.eik Michalke and his team have decided to revive the Open Music Contest (OMC) for the 5th time. The OMC, a Creative Commons music competition, aims to inform musicians and their fans about Creative Commons (CC). To make the 5th contest happen, they have started a crowdfunding campaign.

by Conor at May 14, 2015 02:35 PM

May 13, 2015

Linux Audio Users & Musicians Video Blog

John Option – Where’s my Car?

Well executed Sonic Rock from John Option. Video edited in Kdenlive.

by DJ Kotau at May 13, 2015 10:02 AM

May 12, 2015

Nothing Special

Infamous Plugins are Tiling!

I'm very very excited to say this:

1000 words


Thanks to falktx for helping me understand what he told me months ago: the Infamous Plugins are all now 100% resizable, which means I can use them comfortably in my beloved i3.

These just may be the first rumblings of an early release... stay tuned.

by Spencer (noreply@blogger.com) at May 12, 2015 04:34 PM

Create Digital Music » Linux

Free Version, Linux Support: 7 Cool New Things About Tracktion DAW

With users loyal to some great tools, how do you get attention as a different music production tool? Well, a $60 price, a solid free version, Linux support, and some cool features will definitely get you somewhere.

So don’t overlook the lesser-known options yet – if they can make you happy and get your work done, the choice is up to you.

Tracktion is one of those underdogs. Here are some reasons it’s gotten my attention.

1. You can use it for free, then spend $60 for the latest version. On Windows, Mac, and Linux, Tracktion 4 is now completely free – and it stacks up nicely against other free DAWs, as noted by Bedroom Producers Blog. Tracktion 6, with all the goodies mentioned here, is just $60. So you have a free option if you need a no-cost DAW for collaboration (or if you’re just kind of broke), or you can try it out free to see if you like the workflow before deciding whether to buy.

Oh yeah, and that $60 license includes a copy of Melodyne Essential with the ability to directly manipulate individual notes in audio. That’s less for the whole package than the $99 price of Melodyne itself, meaning you get a whole DAW with Melodyne integration for sort of “less than free.”

2. It has expanded Linux support. Desktop Linux may not be a wildly popular option, but on the other hand, I do know people who have built rock-solid, high-performance setups on old hardware – just as all my Mac and Windows friends complain that their fairly modern machines are starting to crawl. And choice is good. After a long public beta, Linux is now officially supported. And with Bitwig Studio, Ardour, Pianoteq, and others all running perfectly well on Linux, one thing I think developers can't say any more is that it's "impossible" for commercial music tools to support the OS. It may not be an obvious business move, but "impossible" it ain't.

3. Step Clips put step sequencers anywhere. I remember liking doing this sort of stuff back in the day with FL Studio – here, step sequencers show up anywhere, in any track. More evidence that you can still come up with new ideas about how a production tool will work, and fun for us dance music makers:

You can use that with different synths, too:

And add groove:

4. It makes it easy to insert hardware. The Insert Plug-in, demonstrated in this video, is new to T6. Yes, other software (like Ableton) does allow you to "add" hardware to your software rig. But Tracktion has a clever way of keeping things sample-accurate and making this practical in use; watch for more.

5. It does time warping in a way that might impress you even if you aren’t impressed by time warping. Two things here: one, they licensed the brilliant-sounding Elastique Pro algorithm. Two, they’ve done a nice implementation of the UI.

6. Punch in and out quickly. This is a little thing, but for anyone doing recording it can wind up being huge. Tracktion users swear by its workflow, so this is essential:

7. You might have it with your next Mackie mixer. Mackie’s ProFXv2 mixers have built-in USB and an included copy of Tracktion v6.

And all of this is in a single-window interface that feels a little as though you entered an alternate reality where Ableton grew up in the world of the Arrange window instead of Session view. But, you know – different. Bizarro. In a nice way.

Okay, so it's time to actually try this in a project. Guess that's my new summer rule: each week, a new track, in a new DAW. Ahem. Also on my list: PreSonus' Studio One is seeing a serious uptick in users, Harrison's MixBus revision (based on Ardour) looks substantial, and that's to say nothing of old standbys like Reaper and Renoise. Cubase, SONAR, Logic, Ableton, DP, Pro Tools, and the like may be household names, but this market supports a surprising number of alternatives.

http://www.tracktion.com/

Lots more on the KVR Tracktion forum (thanks to various readers for pointing that out!)

The post Free Version, Linux Support: 7 Cool New Things About Tracktion DAW appeared first on Create Digital Music.

by Peter Kirn at May 12, 2015 04:14 PM

May 11, 2015

Libre Music Production - Articles, Tutorials and News

Giada Loop Machine 0.9.6 released

Version 0.9.6 of Giada Loop Machine has been released. Some of the improvements for this version, codename 'Flammarion engraving', are -

by Conor at May 11, 2015 08:39 PM

LMP Asks #8: An interview with Florian Bador

This month we talked to Florian Bador, FLOSS audio enthusiast and founder of Trust Music, a music distribution website that promotes open audio formats.

Hi Florian, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

I am located in Santa Monica, California.

by Conor at May 11, 2015 11:28 AM

John Option release new music video

John Option recently took to a skate park to have some fun recording a video for their new song, "Where is my car?". As with all John Option music, the new song is published under the terms of the Creative Commons Attribution-ShareAlike license.

by Conor at May 11, 2015 10:50 AM

There's a new, up-and-coming graphical EQ on the block

Robin Gareus has just tagged v0.2 of his new graphical EQ on GitHub. Fil4.lv2 is based on DSP from Fons Adriaensen's highly regarded fil-plugin, which you may have seen in your plugin manager as '4-band parametric filter'.

Fil4.lv2 is available as an LV2 plugin but there is also a standalone JACK app.

by Conor at May 11, 2015 10:16 AM

May 09, 2015

rncbc.org

Vee One Suite 0.6.3 - A sixth beta release

[UPDATE] As a micro dot release override, the current sixth beta release is out and crunching the earlier fifth to oblivion ;)

Howdy,

The Vee One Suite of old-school software instruments, aka the gang of three, has bumped up another tiny notch: synthv1, a polyphonic synthesizer; samplv1, a polyphonic sampler; and drumkv1, a drum-kit sampler, are now being released to the masses. Again ;)

There are no big audible changes, if any at all; rather, this sixth beta release is a probable bug fix for drumkv1 LV2 on Ardour (v3 and v4).

Anyway, it's all gone as follows:

  • Sample file drag-and-drop support has been added to the note element list widget (drumkv1 only).
  • The main widget layout has been changed to allow the sampler display and element list to expand or grow vertically as needed (samplv1 and drumkv1).
  • Sample file path mapping has been fixed for LV2 plugin state restoration; it was preventing Ardour from reloading saved session or preset sample files (drumkv1 only).
  • Custom knob/dial behavior mode options are now introduced: linear and angular (aka radial), so as to avoid abrupt changes on first mouse click (the old behavior remains the default).
  • Fixes for some strict tests on Qt4 vs. Qt5 configure builds.

We're still available in dual form, as business as usual:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Manager) and both JACK MIDI and ALSA MIDI input support;
  • an LV2 instrument plug-in.

Enough to tell, the Vee One Suite are free and open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

As always, have (lots of) fun :)

synthv1 - an old-school polyphonic synthesizer

synthv1 0.6.3 (sixth official beta) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2
website:
http://synthv1.sourceforge.net
downloads:
http://sourceforge.net/projects/synthv1/files

samplv1 - an old-school polyphonic sampler

samplv1 0.6.3 (sixth official beta) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2
website:
http://samplv1.sourceforge.net
downloads:
http://sourceforge.net/projects/samplv1/files

drumkv1 - an old-school drum-kit sampler

drumkv1 0.6.3 (sixth official beta) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2
website:
http://drumkv1.sourceforge.net
downloads:
http://sourceforge.net/projects/drumkv1/files

Enjoy && keep the fun ;)

by rncbc at May 09, 2015 10:30 AM

May 08, 2015

Libre Music Production - Articles, Tutorials and News

Fabla 2 - Progress update

Harry van Haaren from OpenAV has just posted an update on Fabla 2's progress, along with a nice demo of some of its capabilities. Check it out!

by Conor at May 08, 2015 07:00 PM

Create Digital Music » open-source

Watch a Hacklab Merge Science and Live Music Technology: MusicMakers

Documentary MusicMakers Hacklab at CTM Festival 2015 from CDM on Vimeo.

With or without computers and electricity, musical performance has the potential to be expressive, powerful, immediate. Making music live in front of an audience demands spontaneous commitment. What technology can do is wire up that potential to other fields in new ways.

And that was the feeling that began 2015 for us, working in the collaborative MusicMakers Hacklab at CTM Festival in Berlin. Neuroscientists met specialists in breathing met instrumentalists.

Think the lightning bolt in the laboratory: it’s alive.

Documentarian and artist Fanni Fazakas went above and beyond her participant role to deliver this terrific documentary video for a look inside that lab.

We get, in particular, a detailed look at the projects.

Working with my co-facilitator Leslie Garcia of Mexico (an alum of the hacklab and now serial organizer of her own hacklabs), we got the chance to open up the lab in some new ways for CTM this year. Biologists and planetary scientists joined the discussions. Bacteria and brainwaves and electric shocks joined the interfaces. And the results went far beyond what we could have imagined.

With collaboration from CTM Festival and Native Instruments, we were able to connect to a festival full of programming exploring body-sound relationships and make use of NI’s facilities and development teams – with input from people who design music hardware and software (and some Reaktor tips, of course).

The video tells the story, but some of the examples:

  • A bacterial interface for sound involving a wearable kombucha mask and a collaboration with industrial design
  • A Wiimote-driven yoga exercise
  • A sonification of data from the sun
  • Feedback from muscles and bodies – even, in our first pre-natal interaction, connecting with a pregnant participant
  • A machine that sequenced Ableton Live to … physical pain?

And the list goes on.

Now, I think looking from outside the process, you might see some of these as familiar ideas. But the other thing to appreciate is that this is just a first chapter – a running start for concepts and collaborations. Very often, pairings formed in hacklabs have led to new work later. I’m every bit as interested in what happens after these events as what happens during – and, indeed, the hacklab format under Leslie’s direction has already toured to Mexico City and Dresden in the intervening time, with more planned. (More on that soon.)

We also got to speak this week at re:publica, Germany's digital culture conference, about hackathons, with myself and representatives of Berlin Music Pool, Tech Open Air, Music Tech Fest, and SoundCloud (on behalf of Music Hack Day). And that in turn came just before the second edition of MIDI Hack (where I'll be this weekend), and yet more events in the coming weeks, plus a larger agenda for engagement evolving across Europe.

Plus, now that bio-hacking labs have met up with music hacking labs, I think nothing may be quite the same.

It’s alive, indeed.

More coverage of the event

Resident Advisor’s lavish coverage of CTM in photos includes many shots of the lab:
CTM in pictures

Dry Magazine featured the lab, including selected interviews with participants:

The future of music performance

Warsaw's Jakub Koźniewski collaborated with Moscow's Dmitry Morozov (aka ::vtol::) on a 3-axis interface for a single finger, which made an ideal technical window for the Atmel blog – yes, the chip folks behind Arduino:

Producing a sonohaptical experience with Arduino [Atmel blog]

German-language only:

Missy, a feminist magazine for young women, profiled the hacklab as part of their CTM writeup:
CTM Festival 2015: Verkörperlichte Technologie und Resonanz

Wired Germany interviewed lecturer (and previous hacklab workshop leader, with Imogen Heap) Kelly Snook:
Zukunft der Musik / Kelly Snooks Handschuhe bringen das Universum zum Klingen

Full description of projects presented in performance

Not all the participants quite made it to the live stage at the end, though everyone produced something. But here are the projects seen in the film:

UN TUNE X CTM FESTIVAL
MusicMakers Hacklab 2015
Tuning Machines

MBO-D
Yoga sound tuning
Diana Combo [PT]
Fanni Fazakas [HU]
Giampaolo Costagliola [VE]
Marie Caye [FR]
Maximilian Weber [DE]

Ableton Live
Max MSP
Darwiin Mote
Nintendo Wii Mote
Emotiv Epoc

Organun Vivum
Bacterial Cellulose Interface
Paul Seidler [DE]
Aliisa Talja [FI]

Arduino
Pulsum OSC
Super Collider

Three Cycles
Solar data sonification

Muharrem Yildirim [TR]
Juan Duarte [MX]

openFrameworks
Pure Data
Arduino
Benjolin

TITOMB = Two Input Three Output Mixing Board
Muscle – machine feedback

Omer Eilam [IS]

Xth Sense
Electromagnetic transducer

Max/MSP
TENS device

Untitled
Participative performance for pregnant body
Theresa Schubert [DE]
Marco Donnarumma [IT/UK]

Xth Sense, Bioacoustic wearable sensor
Custom-made responsive garment
Surround sound

The Finger
sonohaptic + one-finger controller

::vtol:: [RU]
Jakub Kozniewski (panGenerator) [PL]

Processing
PureData
Arduino
Nord Modular
Sonohaptic
One-Finger

Sonic Minds
Real-time EEG data live dj set

MuArts collective
Francisco Marques-Teixeira [PT]
Francisco Rocha Gonçalves [PT]
Horácio Tomé Marques [PT]
Miguel García [MX]

Emotiv Epoc Action Potential
Max MSP
Ableton Live
Reaktor

We Suffer For Our Art
Electrical suffering for musical performance

Karl Pannek [DE]
Adam John Williams [UK]

Ableton Live
Reaktor
Node.js
Max MSP/Jitter
Arduino
Transcutaneous Electrical Nerve Stimulation Electromyograph

Dark Side of the Balloon
Collaborative A/V improvisation

Anastasia Vtorova [RU]
Francesco Ameglio [IT]
Sinead Meaney [IR]

Max MSP
Analog pedals
Low Frequencies

UN TUNE X CTM FESTIVAL
MusicMakers Hacklab 2015
Tuning Machines

Credits:

Animation: Zara Olsson
Music: RUMEX
Film (direction, editing, videography): Fanni Fazakas

http://www.ctm-festival.de/festival-2015/transfer/musicmakers-hacklab/

For a documentary of our previous hacklab outing:

4DSOUND Spatial Sound Hack Lab at ADE 2014 from FIBER on Vimeo.

The post Watch a Hacklab Merge Science and Live Music Technology: MusicMakers appeared first on Create Digital Music.

by Peter Kirn at May 08, 2015 06:08 PM

OpenAV

Fabla 2 – Berlin Progress (no release yet..)

Lots of progress made in the last few days – bugs fixed, features added, and so much more. Follow development of the 2.0 release milestone on GitHub. So what is the current release date? "Soon". Check out the YouTube video – it shows the current capabilities! Stay tuned, -OpenAV

by harry at May 08, 2015 04:27 PM

ardour

Another Great LibreMusicProduction article: hardware inserts

Conor McCormack just posted another amazingly good tutorial for Ardour over at Libre Music Production, this time on using hardware inserts. Read the whole thing: http://libremusicproduction.com/tutorials/how-use-outboard-gear-ardour.

by paul at May 08, 2015 02:45 AM

May 06, 2015

ardour

Help us debug GUI performance on certain video interfaces

Several users have reported that the GUI (graphical user interface) of Ardour 4.x is slow on their systems. We know that this is caused by a combination of video interfaces and the driver for the video interface, but we don't know which video interfaces or which drivers are to blame. By default Ardour 4.x tries to get the video hardware to accelerate drawing, which may or may not happen depending on the card and driver. We could use your help in trying to understand what is happening and where.

by paul at May 06, 2015 08:18 PM

Create Digital Music » open-source

Watch These Videos and Make Musical iOS Apps with Pd, Free

The challenge in making tools, as in making anything else, is really the making. It’s one thing for an idea to exist in your head, another to really get down to construction. And very often great engineering means testing, means building the idea and then refining it. So prototyping is everything.

That could explain the increased passion for hacking. Whereas big development efforts can be a morass of meetings, and traditional prototyping can mean elaborate distractions from testing what really works, "hacks" get something usable going more quickly. And that means testing the usability of an idea happens faster.

libpd, an embeddable version of Pure Data, is meant to be a tool that works both in a weekend hackathon and in a shipping product. (For some shipping products CDM helped with, check out the mominstruments site – more on these this week and next, in fact!)

And this set of video tutorials by Rafael Hernandez is the best introduction I've seen yet to using it. I usually hate sitting through video tutorials. But these are clear, concise, and give accurate advice – and they walk you through the latest version of Xcode, which is otherwise sometimes confusing.

I have no doubt you could watch these over a half hour breakfast and build a cool app hack by the end of the day.

If you don’t yet know Pd, he also has a video series on that:

There are some real gems in there, worth a browse even if you’re a Pd user. Pd is a bit deeper, though, so I’m back to also liking to read and not just watch videos – see also the pd-tutorial and flossmanuals as they cover some more sophisticated techniques.

Maybe you’ll get to do some of this hacking with us in person, if you’re in Berlin:

This week seems to be all about hacking. Tomorrow, I join re:publica, one of Europe’s premiere digital media conferences, to talk about hackathons and collaborative development. Then, this weekend, CDM and MeeBlip are supporting MIDI Hack, a weekend of music creation-focused work hosted at Ableton’s headquarters. Those events are not open to the public and MIDI Hack is full, but we’ll certainly bring some reports your way.

Finally, on Monday, we join Matt Black, the co-founder of Ninja Tune and Coldcut, for a conversation on the future of musical apps and some tools he's helping bring to the world for free – tools that make apps more collaborative, more creative, and more connected:

Synced Up: A Conversation with Matt Black (NinjaTune, Coldcut)

Matt will be showing not one but two frameworks that use libpd for sync and creative coding / creative development, too. So if you’re in Berlin and didn’t get into MIDI Hack, you can still join us Monday. And, again, since only a tiny fraction of you are here in the capital of Germany, ask questions in comments here and we’ll bring as much as we can online.

Wherever you are in the world, get the coffee brewing and limber up those fingers for soldering and coding. More to come.

Are you using libpd in your apps?

We need help updating the libpd showcase. It’s got some great apps, but we want to add more recent work:

http://libpd.cc/portfolio/showcase/

Send a description, one video link, and a couple of stills to us. You can contact us directly.

The post Watch These Videos and Make Musical iOS Apps with Pd, Free appeared first on Create Digital Music.

by Peter Kirn at May 06, 2015 11:47 AM

May 04, 2015

OpenAV

Berlin – A week of progress + Meetups!

This week OpenAV is in Berlin! Daytime is going to be spent finishing Fabla2 – with the goal of getting a 2.0 release out before the weekend – and lots of collaborating with MOD to get ArtyFX working 100%. Linux Audio Meetup: 19:30, 6th May, c-Base, Berlin. PureData Meetup: 20:00, 7th May, c-Base, Berlin. Looking forward to…

by harry at May 04, 2015 04:55 PM

May 02, 2015

Libre Music Production - Articles, Tutorials and News

Vee One Suite 0.6.2 released

Rui Nuno Capela has just announced a new release of his Vee One Suite of plugins. The plugin suite is available in the LV2 plugin format as well as standalone JACK clients, with NSM support. There are 3 plugins in the suite. They are as follows -

Synthv1 - an old-school polyphonic synthesizer
Samplv1 - an old-school polyphonic sampler
Drumkv1 - an old-school drum-kit sampler

by Conor at May 02, 2015 11:38 AM

May 01, 2015

Ubuntu Studio » News

Precise and linux-lowlatency-3.2 EOL

Since three years has become the new standard support period for Ubuntu flavor LTS releases, we decided to end support for Ubuntu Studio 12.04 Precise Pangolin after three years. Along with that, we are also ending support for linux-lowlatency 3.2. The most recent update will be the last one. If you are still […]

by Kaj Ailomaa at May 01, 2015 06:17 AM

April 26, 2015

Libre Music Production - Articles, Tutorials and News

April 2015 newsletter – Interview, Tutorial and LMP features in Linux Format

Our newsletter for April has now been sent to our subscribers. If you have not yet subscribed, you can do so from our start page.

You can also read the latest issue online. In it you will find:

  • LMP article features in Linux Format magazine
  • 'LMP Asks' interview with Giovanni A. Zuliani
  • Arduino tutorial
  • Lots of new software release announcements

and more!

by admin at April 26, 2015 07:54 PM

April 25, 2015

Hackaday » digital audio hacks

Audio Algorithm Detects When Your Team Scores

[François] lives in Canada, and as you might expect, he loves hockey. Since his local team (the Habs) is in the playoffs, he decided to make an awesome setup for his living room that puts on a light show whenever his team scores a goal. This would be simple if there was a nice API to notify him whenever a goal is scored, but he couldn’t find anything of the sort. Instead, he designed a machine-learning algorithm that detects when his home team scores by listening to his TV’s audio feed.

[François] started off by listening to the audio of some recorded games. Whenever a goal is scored, the commentator yells out and the goal horn is sounded. This makes it pretty obvious to the listener that a goal has been scored, but detecting it with a computer is a bit harder. [François] also wanted to detect when his home team scored a goal, but not when the opposing team scored, making the problem even more complicated!

Since the commentator's yell and the goal horn don't sound exactly the same for each goal, [François] decided to write an algorithm that identifies and learns from patterns in the audio. If a home team goal is detected, he sends commands to some Philips Hue bulbs that flash his team's colors. His algorithm tries its best to avoid false positives when the opposing team scores, and in practice it successfully identified 75% of home team goals with 0 false positives—not bad! Be sure to check out the setup in action after the break.
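[François]'s actual detector is machine-learning based and tuned on recorded games. As a much cruder illustration of the general idea (explicitly not his algorithm), here's a sketch that watches a live audio input for sustained energy around a goal horn's fundamental; the frequency and thresholds are made-up placeholders you'd have to tune yourself:

```typescript
// Crude, hypothetical goal-horn watcher: look for sustained energy in one FFT bin.
// Not [François]'s method – just a starting point for this kind of detection.
const ctx = new AudioContext(); // may need a user gesture before it starts running
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
const bins = new Uint8Array(analyser.frequencyBinCount);

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  ctx.createMediaStreamSource(stream).connect(analyser);
  const hornHz = 440; // placeholder – measure your own arena's horn
  const bin = Math.round(hornHz / (ctx.sampleRate / analyser.fftSize));
  let hot = 0;
  setInterval(() => {
    analyser.getByteFrequencyData(bins);
    hot = bins[bin] > 200 ? hot + 1 : 0; // crude magnitude threshold on that band
    if (hot > 20) console.log("Possible goal horn!"); // ~2 seconds of sustained energy
  }, 100);
});
```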


Filed under: digital audio hacks

by Ethan Zonca at April 25, 2015 05:01 AM

April 24, 2015

Hackaday » digital audio hacks

Logic Noise: Sequencing in Silicon

In this session of Logic Noise, we’ll combine a bunch of the modules we’ve made so far into an autonomous machine noise box. OK, at least we’ll start to sequence some of these sounds.

A sequencer is at the heart of any drum box and the centerpiece of any “serious” modular synthesizer. Why? Because you just can’t tweak all those knobs and play notes and dance around at the same time. Or at least we can’t. So you gotta automate. Previously we did it with switches. This time we do it with logic pulses.

The 4017 Decade Counter

The featured chip this session, the one that gets it all done, is the 4017 Decade Counter. It’s a strange chip, left over from the days when people wanted to count things using IC logic chips instead of just running the input into a microcontroller. But therein lies its charm and usability.

4017_pinout

At its simplest, you input a clock signal and one of ten different outputs (Q0-Q9) is set high while the others are all low. On the next rising edge of the clock, the next output is set high and the previous one is set low. This goes on until the count wraps around and starts over.

One feature that makes the 4017 super useful is the reset pin. When a high voltage is set on reset, the first output (Q0) is set high and all the others zeroed out — the chip starts counting at zero again. The cool trick here is that you can connect the reset pin up to one of the Q outputs and the counter will automatically reset once it reaches that output. If reset is connected to Q2, for example, the count will go Q0 then Q1, and then immediately reset back to Q0 again: Q0, Q1, Q0, Q1… If you wanted an octal (divide by eight) counter, you just hook the reset pin up to Q8.
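If it helps to see that counting-and-reset logic spelled out, here's a toy software model of the behavior (just an illustration of the logic, obviously not something you'd run instead of the chip):

```typescript
// Toy model of the 4017's behavior: ten outputs Q0..Q9, advancing on each
// clock pulse, with the reset pin wired to one of the Q outputs.
function stepSequence(resetWiredToQ: number, clocks: number): number[] {
  const activeOutputs: number[] = [];
  let q = 0;
  for (let i = 0; i < clocks; i++) {
    activeOutputs.push(q); // this output stays high for the whole step
    q = (q + 1) % 10;
    // Async reset: the instant the wired output would go high, snap back to Q0.
    if (q === resetWiredToQ) q = 0;
  }
  return activeOutputs;
}

console.log(stepSequence(2, 6));  // reset on Q2 -> [0, 1, 0, 1, 0, 1]
console.log(stepSequence(8, 12)); // reset on Q8 -> an eight-step loop: 0..7, then 0..3
```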

4017_timing

Eight steps is pretty standard (boring?) and you can get nice groove patterns by selecting odd sequence lengths. But if you want your standard drum machine, hooking the reset up to Q8 is the way to go.

Looking at the timing diagram, notice that the reset pin can also be used asynchronously. That is, the chip will reset as soon as the reset line goes high — it doesn’t wait to finish the current clock cycle. Here, for instance, we hit reset while clock step five (on Q4) was still active.

Async reset means that your reset source can come from outside the 4017 chip and its counter. For instance, you could connect the reset line through a pushbutton to VCC and reset the sequence at will. (If you do this, consider a pulldown resistor on the reset line to keep the voltage level well-defined when the button isn’t pressed.)

There are two more pins left over. The "Carry Out" pin is low while any of Q0-Q4 is high, and high while any of Q5-Q9 is high. When the chip is counting up to ten, this produces a nice square wave that cycles once for every ten counts. As the name suggests, this can be fed into another 4017's clock input and you'll have a count-to-100 device. Chain the next carry out to a third 4017, and you can count up to 1,000 clock pulses. (Tie all the reset lines together to zero them at once, if you're actually counting.) Carry out is less useful for us, but we'll play around with it a tiny bit next session anyway.

Finally, there’s the “Inhibit” pin. In most implementations of the 4017 chips, setting inhibit high makes the chip ignore the incoming clock pulses. Some manufacturers’ chips have some slightly more clever logic where the inhibit line can be dual-purposed to count up for high-to-low transitions if the “clock” line is held low and the “inhibit” line toggled. We’ll not be using this feature, so just remember to tie the inhibit line (pin 13) to ground so that it’s not glitching around.

Back to what matters here. We’ve got a chip that’ll put out logic voltage signals on one pin after the other, clockable and resettable. That’s the heart of a simple sequencer. Most of what we’ll be doing this session is making these voltage steps play well with our quick-and-dirty CMOS logic synth modules.

The Basic Sequencer

The sequencer that we’ll be building up this session is nothing more than a 4017 clocked by a 40106-based oscillator that’s running at a “tempo” frequency rather than at audio rates. Indeed, you could stop there. But for the low, low price of another 4017 logic chip, some LEDs and resistors, you can have something deluxe.

Our version of this simple sequencer is going to be built from two 4017s with the clock and reset lines in common between them. This means that the two 4017s will run in lockstep with one another at all times. We can use one 4017 for driving our synth devices and the other for driving ten status LEDs, without having to worry about the LEDs pulling the output voltages low if they draw too much current.

dual_4017s.sch

Of course, if you don't want the LEDs, you can entirely omit the second 4017 from the circuit. Or if you're feeling lucky, you could hang the LEDs off of the signal outputs directly. But since we'll already be demanding a little bit more current from the 4017s than they're designed for, we think it's easily worth the extra chip for insurance and convenience. Plus, it makes for a lot cleaner layout on the breadboard.

Gating Oscillators with the 4017 Sequencer

The first thing you’d probably like to do with the sequencer is to play a bunch of notes. The quick-and-dirtiest way to do that with our current setup is to construct one oscillator per note you’d like to play and then have that oscillator sound only when the corresponding “Q” output of the 4017 is set high. That should be easy enough, and it is.

If you remember back from our first session, we used a diode to create a hard-sync oscillator sound by gating one oscillator with another. The oscillators all work by charging up a capacitor through the feedback resistor, recall, so if you can drain enough current out to prevent the capacitor from charging up, you can silence the oscillator.

4017_and_oscillators.sch

First look at the single audio oscillator on the top right, connected to the 4017 through input “A”. When the corresponding 4017 output is low, whatever current passes through the feedback resistor (RV2) to charge up the capacitor (C2) will get sucked out the diode (D1) into the 4017, and the oscillator won’t oscillate. When the 4017’s output is high, the diode blocks and the oscillator is free to do its thing. Easy and done.

The Diode OR Gate

But what if we want one note to fire multiple times? Here's an interfacing trick that can be handy, called "Mickey Mouse" logic or, less imaginatively, diode logic. The idea is that you can set up the desired default logic state with a pull-up or pull-down resistor, and then override it with signals coming in through a bunch of diodes. To add more inputs to the OR, all you have to do is add more diodes to the circuit, which makes it useful when you need something odd like a seven-way OR function.

diode_or.sch

Consider inputs B, C, and D in this snippet from our full schematic. When none of B, C, or D are high voltage, none of the diodes (D2-D4) will be conducting, and the pulldown resistor (R3) will set the voltage going into diode D5 low, pulling current out of the oscillator and stopping it from working. When any of B, C, or D are logic high, the 4017 will pass current through the corresponding diode and out to the junction with the resistor. When this point is high, the diode D5 won't conduct and the oscillator runs, just as in the single-step version above it.

Why the diodes D2-D4? If one stage of the 4017 is high, say B, the other two must be low. Without the diodes in the circuit, we’d be shorting the two pins of the 4017 together, and all bets are off about the voltage at the junction labelled “Diode OR”. These diodes keep one pin of the 4017 from fighting with another.

Picking the value for the resistor in the diode OR circuit is a little critical. It needs to have a low enough resistance that it can pull down the oscillator circuit. So R3 needs to be less than the value we've got dialed in on the variable resistor that tunes the oscillator (RV3). But R3 needs to be large enough that when the 4017 pushes its high voltage through the output diodes, the voltage at the junction rises enough to block diode D5.

The 4017 is only specified for an input or output drive current of three milliamps with a 10V supply, and only one milliamp at 5V, which means that we should use a pulldown resistor no smaller than 2.2K if we want a voltage higher than VCC/2 at the diode OR's junction. Using something even larger helps reduce the current demand on the 4017.

Indeed, it's this requirement for the 4017 to source a bunch of current that motivates using a second 4017 to handle the LEDs. Our 4017 datasheet only specifies three milliamps of output drive or sink current connected to a 10V supply. (And this drops to one milliamp at 5V VCC.) With 1K resistors on the LEDs, we're probably already drawing five to ten milliamps — way more than the chip is specified for. Adding more load to a single 4017 chip to drive the diode OR, or even more outputs, is asking for trouble. You could imagine buffering each output of the 4017, but at some point it's just easier to toss another 4017 into the design.

Anyway, technical details aside, that handles controlling individual oscillators from the sequencer. And we've seen how to run one oscillator from multiple sequencer stages. With six oscillators per 40106 chip, you should be able to make reasonable melodies with a minimum of parts.

Gate-to-Trigger Pulse Circuit

Now it's time to drive our percussion. If you remember the two-diode VCA from our Cowbell session, we actually built out a "gate to trigger" converter. In modular synth lingo, a "gate" signal is a logic signal that stays high as long as (for instance) a key is pressed, and then drops back down low instantly when it's released. The 4017's individual outputs look a lot like a gate signal — each output is high during its complete step and only during its step. Our cymbal's decaying amplitude circuit, on the other hand, needed a quick pulse at the start of the step, called a "trigger" signal.

diode_vca_interface.sch

At the heart of the gate-to-trigger circuit is a capacitor (C1). Changes in voltage on one side of the capacitor let a bit of current through until the capacitor has charged up enough to resist further current. This turns the leading and trailing edges of our gate signal into positive and negative spike pulses.

We choose to only pass the positive voltage spike by using a diode (D5). The remaining problem, that we glossed over in the Cowbell session, is that the negative spike doesn’t pass through D5. In fact, without the diode pointing up from ground (D7) in the circuit, the right-hand plate of the capacitor C1 would get stuck at a negative voltage with respect to ground. Additional positive pulses sent through from the 4017 on the left-hand side would maybe raise the voltage up as high as zero volts, but certainly wouldn’t be enough to pass through diode D5 and make a sound. In short, you’d have one hit and then you’d never hear it again.

The diode up from ground (D7) prevents this situation by charging the right-hand side of C1 up to at least a diode-drop less than 0V after each negative spike. This makes the gate to pulse circuit work a little bit like a pump; when charge is pushed through the capacitor from the 4017 side, it passes through D5, and when charge is pulled back the other way it is sourced through the D7 “check valve” from ground.

If you buy that analogy, the rest of the cymbals interface circuit should be clear. A diode OR on the left-hand side allows multiple cymbal hits. Again, the choice of the pulldown resistor is important, but here there is a lot less demand for it to be tiny. The resistor R2 is only responsible for discharging the left-hand side of capacitor C1 between hits. If you’re running fast sequences, experiment with lower values.

Variable Trigger Pulse for the Twin-T Drum

Again, for the bass drum sound, we're going to need a gate-to-trigger circuit, and aside from using a smaller capacitor than the one above, it's just the same. But one thing we really like about twin-t drum circuits is the volume dynamics across different input voltage spikes. That is, if you hit the twin-t with a small voltage spike it's quiet, and if you hit it with a large voltage spike it gets loud. Adding this kind of variation to your drum patterns makes them sound less robotic, so it's worth thinking about and spending a couple of resistors on. And indeed, that's all that we'll need.

4069_drums_interface.sch

This circuit combines the outputs from the 4017 in an effective voltage divider. Since only one of the 4017's outputs will be high at any given time, we can figure this out pretty quickly. When A goes high, there's a voltage divider to ground formed by the 22K resistor R1 and the two 100K resistors in parallel, R2 and R3, for 50K. The voltage output at the junction of the resistors is 50 / 72 * VCC (the 50K lower leg over the 72K total), or about 6.25V with a 9V supply.

When either of B or C is high, the effective resistor to ground has the parallel resistance of 22K and 100K resistors, or 18K. The resulting voltage spike has a peak around 1.4V, so it sounds a bit quieter. All of these pulses have to pass through the diode D1 as well, so they’re probably attenuated even further. You can just play around with the values until they sound right.

Output Mixer and Final Details

Finally, all of the various sound sources are combined simply by passing them through 100K resistors and connecting them together at the amplifier’s input. This simple summing “mixer” is quick and dirty and works just fine. If you want one sound source quieter or louder, you can change these resistor values within reason: how much you can get away with depends on the input impedance of the amplifier you’ve got it hooked into. Factors of two are probably OK. Experiment.

A more engineered solution involves removing the DC offset from each sound source and summing them (probably with variable gain) using operational amplifiers or similar. That’s a great idea, but that’s also another project in itself.

Inspiration

The classic 4017-based sequencer is the "Baby 10". The original was intended to drive voltage-controlled analog gear, so it put out an adjustable voltage with each step in addition to the on-off gate signals that we're using. If you've got anything that'll take control voltages, it's easily worth the ten potentiometers to build out a full Baby 10. You'll find tons of links on the web.

Next Session

This session was all about sequencing for control. Next session we’ll go back to crazy. We’ll continue to use the 4017, although next time in “unexpected” ways. But the main attraction is going to be a shift register, specifically the 4015. Mayhem ensues!


Filed under: digital audio hacks, Featured, musical hacks, slider

by Elliot Williams at April 24, 2015 02:00 PM

Libre Music Production - Articles, Tutorials and News

Can't find Ardroid on the play store? Time to check out F-Droid

It was recently brought to our attention that Ardroid, a remote control app for Ardour, no longer appears to be available on Google's Play Store. All is not lost, however. If you haven't heard of F-Droid, maybe it's now time to check it out. F-Droid is an alternative software repository for your Android device that contains FLOSS software.

by Conor at April 24, 2015 07:07 AM

April 23, 2015

Ubuntu Studio » News

Ubuntu Studio 15.04 Vivid Vervet released!

Another short term release is out. Not much is new, but some of the most obvious changes are: a new meta package, ubuntustudio-audio-core (which includes all the core stuff for an audio-oriented installation), and XFCE 4.12. If you want to know more, please have a look at our release notes. You can find the downloads at our […]

by Kaj Ailomaa at April 23, 2015 07:31 PM

April 22, 2015

Create Digital Music » Linux

Now Google Chrome Browser Does MIDI

It’s 32 years old. It’s supported by keyboards and electronic wind instruments and lederhosen. And now you can add your browser to the list. MIDI will never die.

Yes, as of more recent beta and stable builds, Google’s Chrome browser has built-in support for hardware MIDI. Plug in a MIDI controller, and you can play – well, this Web Audio MIDI Synthesizer, anyway:

https://webaudiodemos.appspot.com/midi-synth/index.html

Chris Wilson is the author, and describes it thusly:

This application is an analog synthesizer simulation built on the Web Audio API. It is very loosely based on the architecture of a Moog Prodigy synthesizer, although this is a polyphonic synthesizer, and it lacks the oscillator sync and glide effects of the Prodigy. (AKA: this is not intended to be a replication of the Prodigy, so please don't tell me how crappy a reproduction it is! :)

This uses my Web MIDI Polyfill to add MIDI support via the Web MIDI API – in fact, I partly wrote this as a test case for the polyfill and the MIDI API itself, so if you have a MIDI keyboard attached, check it out. The polyfill uses Java to access the MIDI device, so if you’re wondering why Java is loading, that’s why. It may take a few seconds for MIDI to become active – the library takes a while to load – but when the ring turns gray (instead of blue), it’s ready. If you have a native implementation of the Web MIDI API in your browser, the polyfill shouldn’t load – at the time of this writing, Chrome Canary and Chrome Stable (33) have the only such implementation. The Web MIDI flag must also be enabled via chrome://flags/#enable-web-midi
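For a sense of how little glue sits between the two APIs, here's a rough, hypothetical sketch – not code from Chris's synth – that maps MIDI note-on/note-off messages onto a single Web Audio oscillator:

```typescript
// Rough sketch: drive one Web Audio oscillator from MIDI note-on/note-off.
// Illustrative only – monophonic, no envelopes, not the demo synth's code.
const ctx = new AudioContext(); // browsers may require a user gesture before audio starts
const osc = ctx.createOscillator();
const amp = ctx.createGain();
amp.gain.value = 0;             // silent until a key is pressed
osc.type = "sawtooth";
osc.connect(amp).connect(ctx.destination);
osc.start();

navigator.requestMIDIAccess().then((access) => {
  for (const input of access.inputs.values()) {
    input.onmidimessage = (event) => {
      const [status = 0, note = 0, velocity = 0] = Array.from(event.data ?? []);
      const command = status & 0xf0;
      if (command === 0x90 && velocity > 0) {
        // Note-on: MIDI note number to frequency (A4 = note 69 = 440 Hz).
        osc.frequency.value = 440 * Math.pow(2, (note - 69) / 12);
        amp.gain.value = velocity / 127;
      } else if (command === 0x80 || (command === 0x90 && velocity === 0)) {
        amp.gain.value = 0;     // note-off (or note-on with velocity 0)
      }
    };
  }
});
```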

So, why would you want such a thing?

Well, Google has their Chrome operating system to worry about, for one. And while Chromebooks haven't exactly taken the world by storm, they are racking up tidy sales numbers.

We’ve heard promises of browser-based music for years, of course, and even had some rather viable options (first in Flash, now in Web tools). The promises are old enough that you might rightfully be more than a little dubious about their future. Standalone software performs better, it seems, and the business model that supports it has remained more of an incentive to developers than the unknown world of a browser tab.

That said, I could still foresee someone devising an application we haven’t yet imagined. For instance, if this were more widely deployed, maybe plugging a MIDI keyboard into an app on Facebook isn’t out of the question.

I also still imagine that browser-based music apps could be powerful for education and communication in ways standalone apps might not – and then, you might be willing to settle for slightly less-awesome performance.

In the meantime, this doesn’t matter so much. Developers wanting to toy around with this now can, and there’s code to play with, too. So that’s one of the nice things about the Web: without making any significant investment, our “what if?” scenarios don’t have to be limited to me rambling on and speculating. You can actually try it for yourself.

https://github.com/cwilso/midi-synth

For more reflections on this and Web audio in general, here’s a great opinion piece:

Latest Google Chrome beta gets MIDI support

The post Now Google Chrome Browser Does MIDI appeared first on Create Digital Music.

by Peter Kirn at April 22, 2015 04:36 PM

April 21, 2015

Create Digital Music » Linux

Here’s Why the New Version of the Free Ardour 4 DAW is Great

It’s easy to make an argument to any cash-strapped producer that a free DAW is good news. And it’s easy to convince a free and open source software advocate that a free-as-in-freedom DAW is a good thing.

But that’s not enough. If we’re going to talk about software, let’s make sure it’s worth using.

Ardour, the free and open source DAW, has always been powerful. But it hasn’t always been seamless to use – especially outside of Linux. Ardour 1 and Ardour 2 were incredible feats of engineering, and some people used them to make music, but let’s be honest – outside developers and Linux nuts, you wouldn’t find a whole lot of users. Then Ardour 3 came along and added MIDI – but it still wasn’t quite ready for prime time.

Ardour 4 is something different. It looks better – maybe not pretty, exactly, but easier on the eyes and more comfortable to use. It works better – loads of new functionality changes make it a more well-rounded tool.

But most relevant to most people, you can now install it on Windows and OS X and have it behave like you’d expect a DAW to behave.

Let’s go over the big differences:

It's got a new UI. A bunch of graphics stuff has been reworked from the ground up. There are more vectors, and everything is more modern. (Vectors!) It's also easier to switch color schemes.

It's now on Windows and OS X. On the Mac, it's moved from unofficial to official, supported status. It's also more in line with what you'd expect from a Mac app: Audio Units work more smoothly, and it looks really slick on a Retina display. On Windows, you can get unsupported nightly builds – in other words, Windows is where the Mac was until recently. But on both, more native plug-in support and more flexibility with audio engines means you don't have to feel like you're running a Linux app on your OS of choice.

You can use any audio engine. Yes, it works with the powerful JACK, but now also ALSA (Linux), ASIO (Windows), and Core Audio (OS X). Also, misbehaving plug-ins are less likely to cause crashes.

It does VSTs. With Windows support, you also get native VST support – and VST support is better on both Linux and Windows (Linux also has some nice plug-ins that use the VST format).

It’s powerful for MIDI editing now. MIDI bounce, mix MIDI and audio data flow (as you might for a soft synth), edit modelessly, and make transformations more easily, among lots of other details.

Ripple edits. (Move or delete and other stuff moves to fit – video editors know what I’m talking about.) Why don’t all DAWs have this again?

More Control. QCon, original Mackie Control devices, AKAI MPK61, etc.

Less Memory. An 80% reduction in memory consumption headlines the other performance improvements here.

There are a lot of other tweaks and improvements, too, even down to SoundCloud export.

Editing in Ardour for traditional tasks can be blindingly fast. You can focus more easily under the mouse, for instance, or quickly split regions. (You can still use the ‘s’ key for the latter, but now a mouse tool also accomplishes the task.) This menu sort of embodies what I mean:

You can easily shift the focus while you’re editing – hugely quick.

You probably have some sort of DAW at this point for some of your work. What you might not have is a DAW that works comfortably and reliably on any machine, including Linux, one that you can share with friends without worrying about who bought what, one go-to tool for quick editing and tracking when the others fail. And with these improvements, Ardour could be that DAW.

You can try it out for free – demos are free everywhere, and you can build from source. Or you can pay as little as US$1 to download a ready-to-run version.

In fact, the growing success of Ardour shows some vastly improved numbers for the voluntary subscription/payment model. For just a couple of bucks a month, you can really have some impact on Ardour’s development and earn yourself access to support. It seems like a great means of funding the project – that is, now that Ardour is picking up some steam.

I think it’s worth a few bucks and worth keeping around on your drive, even if you have another DAW.

Also, the next version of Harrison’s brilliant Mixbus – Mixbus 3 – will use the Ardour 4 code base.

Check out Ardour, download a free demo, or pay to get a pre-built version – starting at just a buck.

Ardour.org
What’s New
Download/Buy

And for the specific situation on Windows (we’d love some feedback from people testing this):
http://ardour.org/windows.html

The post Here’s Why the New Version of the Free Ardour 4 DAW is Great appeared first on Create Digital Music.

by Peter Kirn at April 21, 2015 02:55 PM

Scores of Beauty

„Der Rosenkavalier“ – Chamber Version with Berlin Philharmonic Orchestra

This post is partly an advertisement and partly a success story about creating Beautiful Scores with LilyPond, which I had the opportunity to experience recently.

On April 26th, 2015, at 3pm and 5pm, we will play a children’s version of Richard Strauss’ „Der Rosenkavalier“ in the Philharmonie of Berlin, commissioned and played by the Berlin Philharmonic Orchestra. I made the arrangement and the engraving, and I will conduct these performances.

Since 2013, the Berlin Philharmonic Orchestra has been the orchestra in residence for the Easter Festival of Baden-Baden. Besides a large opera production in Germany’s biggest opera hall, there are numerous symphonic concerts, chamber music concerts, a chamber opera in the historical Theatre of Baden-Baden, and a children’s version of the main opera. This is a cooperation between the Festspielhaus Baden-Baden, the Berlin Philharmonic Orchestra and the Academy „Musiktheater heute“ of the Deutsche Bank Foundation.

In 2015 the main opera was „Der Rosenkavalier“ (1911) by Richard Strauss, and so the children’s opera became „Der kleine Rosenkavalier“ (The Little Knight of the Rose), a one-hour version for children aged 5 to 10 – and the first approved children’s arrangement of the opera. Last summer I was entrusted with arranging the music for 13 musicians and 4 singers. We chose the excerpts together with director Hersilie Ewald and dramatic adviser Sophie Borchmeyer. We planned a total duration of one hour, so we kept only 40 minutes of the opera’s 3,45 hours – but the most beautiful ones!

Strauss’ original orchestra has over 100 musicians, and it had to be reduced to… 13!
The instrumentation of our version is:

  • 1 flute (piccolo)
  • 1 oboe (english horn)
  • 1 clarinet
  • 1 bassoon (contrabassoon)
  • 1 french horn
  • 1 harp
  • 1 harmonium / celesta
  • 1 percussion player
  • 2 violins
  • 1 viola
  • 1 violoncello
  • 1 contrabass

This chamber orchestra is very close to the one Schoenberg used for his version of Gustav Mahler’s „Das Lied von der Erde“ – except that I didn’t use the piano.

I tried to preserve the orchestral colors as much as possible, and almost all of the wind soli remained unchanged. The harmonium has a very useful filling function. The bassoon often plays melodically in a high register, the french horn must play both low and high, and in general the orchestral parts are slightly more difficult than the original – which is already a challenge for every orchestra musician!

I first wrote the whole score by hand. In a second phase I copied it into LilyPond. I used TextMate as my editor because it lets me define shortcuts for the commands I use most often.
Entering the music into LilyPond was quite quick: on good days I could input 20 pages of orchestral music. LilyPond still has some bugs… For example, the default settings don’t avoid collisions between accidentals and slurs:
Walzer with collision
which should be corrected to
Capture d’écran 2015-04-20 à 16.17.58
I looked for examples and rules in the reference engraving books but couldn’t find precise instructions… Does anyone have an idea?
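
One possible workaround – not necessarily the proper engraving rule, and not what was used for this score – is to reshape the offending slur manually with \shape. The pitches and offsets below are made up and would need adjusting by eye:

{
  % nudge the slur's middle control points upwards so it clears the accidental
  \shape #'((0 . 0) (0 . 0.5) (0 . 0.5) (0 . 0)) Slur
  g'4( aes' bes' c'')
}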

And I got mad at the annoying issue 3287: when a clef change occurs at the end of a staff, slurs in other staves are displayed incorrectly, while ties are not!

These are all little details, and I got most of them corrected – some others not. With a lot of work and constant checking against Gould’s “Behind Bars”, I produced nice scores and received very positive feedback from the musicians.

I would be very happy to see many LilyPond users in the hall to celebrate a glorious milestone in LilyPond history: one of the best orchestras in the world playing from LilyPond scores – and enjoying it a lot!

More details and tickets

by Aurélien Bello at April 21, 2015 02:33 PM

April 20, 2015

Scores of Beauty

“Defining a Music Function”

It’s been about a year since I started a category with Scheme tutorials, and back then I declared them as a “documentation of my own thorny learning path”. By now I’ve experienced a significant boost in Scheme “fluency” which was triggered by (and at the same time enabled) a number of projects and enhancements, for example the ScholarLY package and the jump into a fundamental redesign of openLilyLib. I thought it would be a good idea to pick up the tradition of these tutorials before I forget too much about the difficulties of finding my way around LilyPond’s Scheme. This is of course not a carefully crafted “curriculum” but it will always be a random collection of (hopefully) useful snippets of information, each one written with the goal of explaining a single topic in more depth and at the same time more casually than the LilyPond reference can do.

Today I’m writing a tutorial that I would have needed a year ago ;-) about one thing that always vaguely confused me. I usually managed to just get around it by either routinely “doing it as always” or by getting some ready-to-use code snippets from a friendly soul on lilypond-user. This is the topic of defining music-, scheme- and void-functions in Scheme. I will analyze a music function I introduced in last year’s posts and explain what is going on there. Understanding this gave me surprising insights, and I think knowing this kind of stuff is really helpful when trying to get more familiar with using Scheme in LilyPond.

So this is the simple music function we’re going to dissect; it takes a color? argument and applies that color to the next note’s note head and stem:

colorNote =
#(define-music-function (parser location my-color)
   (color?)
   #{
     \once \override NoteHead.color = #my-color
     \once \override Stem.color = #my-color
   #})

{
  c' \colorNote #red d' c'
}

(See the other post for the output of that example.)

This is something I’d known how to put together for quite some time, but I’ve always wondered about a few things here (I just managed to suppress the questions, because it did work) – particularly the relation between the colorNote = part and the #(define-music-function part, and the parser location arguments. But to be honest, I wasn’t really clear about the part returning the “music expression” either.

In the current post I will go into quite some detail about the declaration/definition of the music function and the topic of “return values”. However, I’ll skip the third issue because that’s somewhat unrelated to the other two and because the post is already quite long without it.

Defining the Music Function

Let’s start with looking at it from the perspective of the “user” or “caller”. colorNote is a “music function” that returns a “music expression”. This is the part enclosed in the #{ #} brackets, containing two overrides (yes, overrides are also “music”) and applying the #my-color argument passed into the function. So when writing \colorNote #red it’s quite obvious that I call the function colorNote, passing it the argument #red.

But the syntax of how this “function” is defined somehow always startled me, and I’m sure there are many others who could write such a function too, without really knowing what they are doing. Let’s have a look at a comparable function definition in Python (for those who know Python):

def colorNote(parser, location, color):
    return some_music_expression

Here the syntax makes it clear that we are defining colorNote to be a function object, taking some arguments and returning a value. When we use that function later in the code, program execution will jump right into the body of that function definition. But what do we actually do when “defining a music function” in LilyPond?

From the LilyPond documentation (and last year’s posts) we learn that the following expressions are equivalent:

myVariable = 5

#(define myVariable 5)

Both define a variable myVariable and set its value to the integer number 5. Or, expressed the other way round, they take the value of 5 and bind it to the name myVariable. Later in the program (or the LilyPond file) one can refer to this name and get the value back.

We can rewrite the definition using the #(define syntax like this:

#(define colorNote
   (define-music-function (parser location my-color)
     (color?)
     ; ...

So what is the value we are binding to the name colorNote in our example?

Intuitively I would expect that we bind a function’s implementation to the name colorNote, similar to what the Python example seems to do. But here we don’t seem to assign a function or function body but define-music-function instead. If you start thinking about it this seems very strange. Fortunately you can continue thinking about it and it becomes clear, so stay tuned…

Maybe you notice a small but important difference to the above definition of myVariable: define-music-function is enclosed in parentheses, while 5 was not. Parens (usually) indicate we are calling a procedure in Scheme, and this call evaluates to something. Whenever you want to use a value you can instead call a procedure, and the value this procedure evaluates to is then used as your value. (You may want to read this paragraph again… or consider the following mathematical examples. In Scheme (+ 3 2) evaluates to 5, (- 3 2) evaluates to 1, and (+ 3 (+ 1 1)) evaluates to (+ 3 2) which then evaluates to 5.)
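
As a tiny illustration (not from the original post), the right-hand side of a definition can itself be a procedure call; only the value it evaluates to gets bound to the name:

#(define myVariable (+ 2 3))  % the call (+ 2 3) evaluates to 5, and 5 is bound to myVariable
#(display myVariable)         % prints 5 to the console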

So what we really do with our music function is call define-music-function which evaluates to a “music function” and bind this result to the name colorNote. Later in the LilyPond file when we call \colorNote we do not execute the code after \colorNote = (which is what would happen in the Python example) but instead we call the music function that has been created when \colorNote has been initially parsed. (For a more detailed and technical discussion you may read the Wikipedia article about “first class functions”).

define-music-function <argument-list> <argument-predicates> <body> itself takes three arguments, each enclosed in its own pair of parentheses (here the parens are used for grouping items into a list, not for a procedure call):

  • the list of argument names:
    (parser location my-color)
  • a list of argument predicates (types that the arguments have to have)
    (color?)
  • the actual implementation body

my-color is an arbitrary name that has been chosen for an argument. It lets you access the value that has been passed into the music function at that position. Note that this is the only argument that the user has to supply when calling the music function; parser and location are passed implicitly. According to the manual, parser location simply has to be copied literally, which is also confusing – but we won’t go into that detail today.

color? is the type of the (single) value that can be passed to the function, so you can’t for example write \colorNote #'thisIsNotAColor (which would pass a Scheme symbol to the function).

Side note: You can also define music functions that don’t take any such arguments, so the first element in define-music-function would be just (parser location). It always puzzled me why I’d have to add () in such cases, but now it becomes clear: define-music-function expects a list of argument predicates as its second argument, and if there are no arguments to be type-checked this second argument is still expected – an empty list has to be passed as the <argument-predicates>.
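
For illustration, here is a made-up music function without any user arguments (not from the original posts); note the empty () that still has to be passed as the predicate list:

hideNote =
#(define-music-function (parser location)
   ()   ; no arguments to type-check, but the empty predicate list is still required
   #{
     \once \override NoteHead.transparent = ##t
     \once \override Stem.transparent = ##t
   #})

{
  c' \hideNote d' c'
}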

The “Return Value” – Music-, Scheme- and Void Functions

Digression: “Procedures” and “Functions”

Before going into the topic of the different function types I have to dwell on a certain fuzziness in terminology: procedures and functions. When reviewing this post I realized that I wasn’t completely clear about the distinction and used the terms interchangeably. My request on the lilypond-user mailing list raised a discussion showing that it actually isn’t a trivial topic. So while in the end the distinction is more or less negligible, there are things you may want to digest in order not to get confused when people use these terms in the LilyPond/Scheme context.

Some programming languages make a distinction between procedures and functions, some don’t. If a language distinguishes, it is mostly the question of a return value: functions return a value, procedures don’t. This means that while both are separate blocks of code that can be called from within a program, functions produce a value that can be worked with while procedures just “do” something which doesn’t directly affect the calling code.

Other languages don’t make a distinction: they call both types procedures or functions and usually have a syntactic way to clarify the behaviour. However, it’s quite common for people to make the distinction even though their programming language doesn’t. If you notice this, just try to ignore it and don’t be confused.

The implementation of the Scheme programming language that is used by LilyPond is Guile 1.8. In it, basically everything is considered a procedure, regardless of whether it has a return value or not. Take the following expression:

(car '(1 2 3 4 5))

This expression is a procedure call, namely the call to the procedure car. The list '(1 2 3 4 5) is passed as the argument to car, which evaluates to 1, the first element of the list. So the “return value” that is then used in the program is 1. Other procedures, for example (display '(1 2 3 4 5)), do not evaluate to anything, so the “value” in the place of the procedure call is <unspecified>.

Both are called “procedure” in Guile’s terminology although one returns a value and the other does not. However, you will often encounter the naming convention of calling the “returning” versions “function”. This is actually against the official naming convention of the Scheme dialect that LilyPond uses, but it is quite common and doesn’t pose a real-world problem. And – as far as I can see – this is also true for the terms “music function”, “scheme function” and “void function”.


OK, let’s get back on track and consider the “return value” of our music function. Above I wrote that colorNote returns a music expression containing two overrides. But what does that actually mean?

The body of a procedure in Scheme is a sequence of expressions, and each expression can be either a value or a procedure call. The value of the last expression in the body is the value the whole function evaluates to – or, more colloquially, is the return value of the function. In the case of \colorNote this last expression is not a Scheme expression but a LilyPond music expression, as indicated by the #{ #}. From Scheme’s perspective this is a single value (of type ly:music?), but from LilyPond’s (or rather a user’s) perspective this music expression can also be a grouped sequence of music elements – in our example we have two consecutive overrides.

To conclude we can say that a “music function” is a procedure whose last expression evaluates to LilyPond-music. It can be called everywhere that you can write a music expression in LilyPond – just like in our initial example at the top of this post.
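
To make the point about the body being a sequence of expressions concrete, here is a hypothetical variation of colorNote (not from the original post) with two expressions in its body; only the final music expression becomes the return value, while the ly:message call is evaluated purely for its side effect:

colorNoteVerbose =
#(define-music-function (parser location my-color)
   (color?)
   (ly:message "about to color a note")   ; first expression: side effect only, its value is discarded
   #{
     % this music expression is the last one, so it is the function's return value
     \once \override NoteHead.color = #my-color
     \once \override Stem.color = #my-color
   #})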

Now, what are scheme- and void-functions then?

The whole subject of defining these functions/procedures is identical to the definition and calling of music functions; the only (but crucial) difference is the return value. A procedure defined using define-scheme-function can return any valid Scheme value, and it can be used anywhere the respective Scheme value can be used. The following example takes a string as its argument and returns sort of a palindrome version (just for the sake of the example). The type of the return value is string?, and this can for example be used to set a header field.

addPalindrome =
#(define-scheme-function (parser location my-string)
     (string?)
     (ly:message "We will add the reverse of the string to itself")
     (string-append my-string (string-reverse my-string))
     )

\header {
  title = \addPalindrome "OT"
}

{
  c' 
}

The “body” of this procedure is a sequence of two expressions. The first one, the call to ly:message, prints something to the console output but doesn’t evaluate to a useful value; the second is the call to string-append, which is a procedure call that evaluates to a string.

Side note 1: Here again you can see an example of nested procedure calls and their evaluations: string-append here takes two arguments, the first being a value (namely the argument my-string), while the second argument is again a procedure call. The operations that Scheme actually performs one after another are:

(string-append my-string (string-reverse my-string))
(string-append my-string (string-reverse "OT"))
(string-append my-string "TO")
(string-append "OT" "TO")
"OTTO"

So the nested expression in the first line of this example eventually evaluates to “OTTO”. And as this is the last expression in the procedure body its value will be the return value of the procedure as a whole, which in this example is used as the title of the score.

Side note 2: You can see that there is a single closing parenthesis on the last line of the procedure. This matches the opening paren in #(define-scheme-function. Scheme’s coding guidelines suggest not placing parens on their own lines but rather concatenating them at the end of previous lines. As you can already see in these simple examples, nested procedure calls can quickly build up, so it’s not uncommon to encounter Scheme procedures with, say, ten closing parens in the last line. However, I laid it out like this to show explicitly that each line in the example is one expression. Temporary reformatting is a very useful tool for debugging procedures or for understanding the structure of existing procedures you are looking at. Don’t hesitate to insert line breaks and make use of your editor’s assistance to re-indent the code, as this will make things much clearer. Once everything is ready it’s advisable to re-compress the procedure again, even if you are used to other layouts that are common in other programming languages.

By now you can probably guess what a void function is – basically the same as the other two, but without a return value. So you will want to use define-void-function when you want the procedure to actually do something (also known as “side effects”) but don’t need any return value. The following example will print a message to the console:

displayLocation =
#(define-void-function (parser location)
     ()
     (ly:input-message location "This was called from a 'void-function'")
     )
     
{
  c'
  \displayLocation
}

There is just one expression in the function body, printing a message. In the case of define-void-function it doesn’t matter whether this (or rather the last) expression evaluates to something or not; the function won’t return any value at all. This also has the effect that you can actually call void functions from anywhere. The parser won’t try to use them as a value but will simply execute their code. So the following example is equally valid and working.

displayLocation =
#(define-void-function (parser location)()
   (ly:input-message location "This was called from a 'void-function'"))

\displayLocation

I hope this post helped you understand a few basic things about how music, scheme, and void functions work and how they are integrated into LilyPond documents. This is only a tiny start, but understanding these concepts thoroughly definitely helps with proceeding to more complex and more powerful functions. As a final “assignment” I’ll leave it to you to figure out what location does in the last example, how it is used, and how its value actually got into the function.

by Urs Liska at April 20, 2015 12:54 PM

April 19, 2015

ardour

About Subscriptions

There are a number of recurring questions about Ardour subscriptions. This article tries to collect them all and provide a coherent set of answers.

read more

by paul at April 19, 2015 02:26 PM

April 18, 2015

Libre Music Production - Articles, Tutorials and News

Ardour 4 is released!

Ardour has just seen a new major release, version 4. Since the last release, there have been over 4,000 commits.

Ardour has seen a lot of infrastructure work and now uses a new graphics engine (cairo) for the editor window. You can now also use audio backends other than JACK – for example ALSA – although JACK is still the recommended backend.

by Conor at April 18, 2015 11:08 PM

ardour

Ardour 4.0 released

The Ardour project is pleased to announce the release of Ardour 4.0. This release brings many technical improvements, as well as new features and over a thousand bug fixes.

The biggest changes in this release:

  • Better cross platform support. Ardour now runs on GNU/Linux, OS X and for the first time, Windows.
  • JACK is no longer required, making it easier than ever for new users to get Ardour up and running (though JACK is still usable with Ardour).
  • The user interface has seen a thorough overhaul, leading to a more modern and polished experience.

ardour 4.0

Read more below for a more detailed summary of the changes ...

read more

by paul at April 18, 2015 09:20 PM

Libre Music Production - Articles, Tutorials and News

OpenAV Release System : The End!

OpenAV have just released a statement announcing the end of the OpenAV Release System.

The idea of this system was that source code for OpenAV projects would be released 12 months after their announcement, with donations bringing the release date forward one month at a time.

The full statement can be read on the OpenAV website. The accompanying video is below.

by Conor at April 18, 2015 09:16 PM

OpenAV

OpenAV Release System : The End!

OpenAV Release System : The End!

TL;DR: OpenAV will not use the OpenAV Release System for future projects. One year after the OpenAV Release System was presented at the ZKM, it is time to say goodbye. The rest of the article explains the drawbacks of the OpenAV Release System. So why is OpenAV no longer interested in the release system – it worked, right!?… Read more →

by harry at April 18, 2015 06:33 PM

Hackaday » digital audio hacks

DIY Bass Drum Microphone Uses Woofer Cone As Diaphragm

Anyone into audio recording knows that recording drums is a serious pain. Mic setup and positioning can make or break a recording session. One particular hurdle is getting a great sound out of the bass drum. To overcome this, [Mike] has built a microphone using an 8″ woofer in an attempt to capture the low-end frequencies of his bass drum. Using a speaker as a microphone isn’t a new idea and these large diaphragm bass drum mics have taken commercial form as the DW Moon Mic and the now-discontinued Yamaha SubKick.

The project is actually quite simple. The speaker’s positive terminal is connected to Pin 2 of a 3-pin XLR microphone connector. The speaker’s negative terminal is connected to the connector’s Pin 1. [Mike] made a bracket to connect the woofer to a mic stand, which in turn was cut down to position the woofer at bass drum height. The setup is then plugged into a mixer or pre-amp just like any other regular microphone.

[Mike] has since made some changes to his mic configuration. It was putting out far too hot a signal for the preamp, so he added an attenuation circuit between the speaker and XLR connector. Next, he came across an old 10″ tom shell and decided to transplant his speaker-microphone from the open-air metal rack to the aesthetically pleasing drum shell. Check out [Mike’s] project page for some before and after audio samples.


Filed under: digital audio hacks, musical hacks

by Rich Bremer at April 18, 2015 05:01 PM

Create Digital Music » Linux

RME Do Compact Audio with 4X Analog, Digital, MIDI

IMG_3139

There are plenty of fairly good audio interfaces out there. Decent-to-middling, yes. But if you’re picky about getting something really top-notch in terms of audio performance and stable low latency, that list gets a whole lot shorter.

Want it to be really compact? That list gets shorter still. “Pro” often translates to “rack mount” – but just because you want something light and small doesn’t mean you don’t want something serious.

RME is a brand that very often winds up on that short list. And I suspect their new BabyFace Pro ticks a lot of the boxes you want.

First, four is a very good number – as in four inputs, four outputs. A lot of boxes give you two of each, but that often finds you running out of I/O. Others give you more – which you often never use. Four inputs cover a lot of recording applications without needing a mixer. A separate headphone out means you can listen to a separate monitor or cue mix, or simply have two more line out channels (say, for rear speakers).

And the BabyFace Pro has a lot of other stuff that other boxes leave out:
MIDI.
Digital (ADAT and S/PDIF).
Hardware meters (so you can actually see your levels easily).

IMG_3142

IMG_3141

Now, MIDI isn’t hard to come by, but it’s nice to have. The I/O configurations make loads of sense, too. You get headphone jacks for both mini and jack plugs – with both high and low impedance, for whichever cans you have handy. You get inputs for both line level and high impedance (for instruments). You get real XLRs for your mics, even though it’s compact. (Only outs 3/4 are on minijack, but that’s not really an issue, I think.) And the form factor is lovely.

The only disadvantage I can see is, it’d be nice to have four line outs and then headphones switchable to 1/2 or 3/4, which is not what you get here – so quad fans may want to go elsewhere.

The BabyFace Pro is a USB interface (USB2/USB3), but RME is one company that seems to get that right. They really do produce devices that can clock reliably, thanks to what they explain is smart jitter suppression, and I can’t think of a single brand that has their sterling reputation for low latency performance on OS X and Windows and iOS/mobile and (though they don’t mention this) Linux. (Speaking of Linux: a friend actually tried his Linux box on the Messe show floor and verified plug-and-play operation and terrific performance. And, hey, don’t you want to invest in a box that will work with everything?)

IMG_3140

The marketing for this interface is a bit funny – with the slogan “reengineered, not remastered” and some pretty generous assumptions that customers will understand, for instance, why having an FPGA is important. But skipping that, I think this will make it onto that short list of really good audio interfaces. You can sign up now and they’ll let you know when it arrives.

http://babyface.rme-audio.de/

The post RME Do Compact Audio with 4X Analog, Digital, MIDI appeared first on Create Digital Music.

by Peter Kirn at April 18, 2015 02:21 PM

April 16, 2015

Libre Music Production - Articles, Tutorials and News

Guitarix updates

Things have been moving along for the Guitarix project lately. First of all, they have redesigned their website and forum. Also, their website now has a new domain at guitarix.org, while the source code is still hosted at sourceforge.

by Conor at April 16, 2015 08:01 AM

April 15, 2015

OpenAV

Post LAC-2015

Post LAC-2015

Wow, what a wonderful time at LAC – so many interesting projects, passionate developers and enthusiastic users. Once again, LAC has proven to be a fantastic place to talk with everybody, get all-important user feedback, and last but not least – enjoy a beer with the Linux audio community. AVTK presentation: OpenAV presented on the AVTK UI toolkit, which… Read more →

by harry at April 15, 2015 12:24 PM

April 14, 2015

Libre Music Production - Articles, Tutorials and News

LAC 2015 photo gallery & videos now online

Thanks to Rui Nuno Capela and his trusty camera, you can now check out photos from this year's Linux Audio Conference. You will find the photo gallery over at rncbc.org

He has also uploaded videos of LAC's Linux Sound Night. You can check out these videos on Rui's YouTube account.

by Conor at April 14, 2015 08:29 AM

Hackaday » digital audio hacks

Cyclist Pulled Over for Headphones Builds Neighborhood Shaking Bicycle Boombox

Riding around with headphones on is not the safest of things; those people could hit you! [Victor Frost] was actually pulled over for doing it. Although the bicycle police didn’t ticket him, they did push him over the edge into pursuing a compromise that lets him listen to tunes and perhaps still hear the traffic around him.

The build puts 200 Watts of audio on his rear luggage rack. He used a couple of file totes as enclosures, bolting them in place and cutting one hole in each to receive the pair of speakers. The system is powered by two 6V sealed lead-acid batteries which are topped off by a trickle-charger when the bike is parked.

Looking through the build log, we almost clicked right past this one. It wasn’t immediately apparent that this is actually version four of the build, and each version is a completely different spin. The top-down view of the plastic-tacklebox-wrapped v3 is sure to make you grin. Video overviews of the first two versions are linked in [Victor’s] details section of the project page linked at the top of this post. The progress is admirable and fun to dig through: each iteration is quite a bit different, but bigger, better, and more self-contained than the last.

Okay, okay, maybe this isn’t going to shake the neighborhood… until he adds a Bass Cannon to it.


Filed under: digital audio hacks, transportation hacks

by Mike Szczys at April 14, 2015 05:00 AM

April 10, 2015

Hackaday » digital audio hacks

Logic Noise: More CMOS Cowbell!

Logic Noise is an exploration of building raw synthesizers with CMOS logic chips. This session, we’ll tackle things like bells, gongs, cymbals and yes, cowbells that have a high degree of non-harmonically related content in them.

Metallic Sounds: The XOR

I use the term “non-harmonic” in the sense that the frequencies that compose the sound aren’t integer multiples of some fundamental pitch, as they are with a guitar string or even our square waves. To make these metallic sounds, we’re going to need to mess things up a little bit, and the logic function we’re introducing today to do it is the exclusive-or (XOR).

An XOR logic gate has two inputs and it outputs a high voltage when one, and only one, of its inputs is at the high voltage level. When both inputs are low or both inputs are high, the output of the XOR is low. How does this help us in our quest for non-harmonic content? It turns out that the XOR logic function is the digital version of a frequency mixer. (Radio freaks, take note!)

Ideal frequency mixers take two input frequencies and output the sum and difference of the two input frequencies. If you pipe in 155 Hz and 200 Hz, for example, you’ll get out the difference at 45 Hz and the sum at 355 Hz.

Because we’re using square waves and an XOR instead of an ideal mixer, we’ll also get other bizarre values like 2*155 – 200 = 110 Hz and 2*200 – 155 = 245 Hz, etc. All said, the point is that we get out a bunch of frequencies that aren’t evenly divisible by one another, and this can make for good metallic sounds. (And Dalek voices, for what it’s worth.)

The 4070: Quad XOR

4070_pinout

Which brings us to our logic chip du jour. The 4070 is another 14-pin wonder, just like the 40106 and the 4069UB, and the power and ground pins are in the same places. Since an XOR gate is a three-pin deal – two inputs and one output – only four XORs fit on the 14-pin chip instead of six inverters.

By now, you’re entirely used to the 4000-series logic chips, so there’s not much more to say. This is a great chip to add sonic mayhem very easily to your projects.

Frequency Modulation with XOR: More Cowbell!

Let’s make some metallic noise. The first step is to mix two oscillators together. Whip up two variable-frequency oscillators on the 40106 as we’ve done now each time, and have a listen to each individually. Now connect each output to the inputs of one gate of an XOR in the 4070. As promised, the resulting waveform is a lot more complex than either of the two inputs.

Now tune them around against each other and listen to all the strange frequency components created as the sums and differences slide in and out. Cool, no? Here’s a bonus video that you can skip, but that demonstrates what’s going on with the frequency mixing.

Two-diode VCA

After a couple of minutes playing around, you’ll start to realize that this sounds nothing like a cowbell. We’ll need to shape the volume of the sound in time to get anywhere, and this means another step in the direction of “traditional” synthesizers. We’ll build up a ghetto voltage-controlled amplifier (VCA) and drive it with the world’s simplest envelope generator.

An active VCA takes its input signal and either amplifies or attenuates it depending on the control voltage (CV) applied on another input. When the control voltage is high, more of the sound gets through, and when the CV is zero, the output is ideally silent. Building a general-purpose VCA is a bit out of scope for our needs, so let’s just cobble something together with a few diodes.

This circuit works by cheating, and works best with digital logic signals like what we’ve got. When the input from the XOR is low, diode D1 conducts in its forward direction and all of the control voltage signal is “eaten up”, sunk into the output of the XOR chip.

Conversely, when the XOR is high, diode D1 is reverse-biased and blocks the CV, leaving it nowhere to go except through diode D2 and out to our amplifier. The resistor needs to be large enough that the XOR can sink all of its current, but otherwise the size is non-critical.

cap_square_to_pules

Notice what’s happened here. The voltage at the output is no longer the GND to VCC of our logic circuit, but instead ranges only from GND to the control voltage (minus a diode drop). So if we want to make a quieter version of the XOR input, we just lower the control voltage. It’s a simple voltage controlled attenuator. Now we just need to create a voltage signal that’s got something like the amplitude contour of a cowbell.

Remember how we converted square waves into trigger pulses by adding a series capacitor? The resulting voltage had this steep rise and exponential trail-off.

If we add in another capacitor, we can lengthen out the decay. And then while we’re at it, we can add in a potentiometer to control the rate of that decay.

diode_vca_with_envelope_no_diodes

Capacitor C1 converts the square wave into a pulse and charges up C2 very quickly, applying the positive voltage to the input of our VCA. The charge on C2 drains out through the variable decay potentiometer.

This simple circuit actually works well, but has one shortcoming. For long decay times, as illustrated above, the decay gets cut off when the control square wave goes low. If you only want short percussive hits, the simple circuit is enough. If you’d also like longer decays, you’ll need to add a couple diodes to chop off the negative part of the control voltage spikes.

diode_vca_with_envelope_sustain_no_diodes

Now that only periodic positive spikes are getting through to our decay capacitor, we have a nice variable-rate exponential decay voltage envelope. Here’s how it looks on the scope (with some extra capacitance slowing down the attack – it might have been connected to the laptop soundcard). You can clearly see the control-voltage envelope chopped up by the diode action and the XOR’s output.

envelope_with_xor_drum

Putting the XOR frequency-modulated sounds through the two-diode VCA that’s driven by our quick and dirty envelope generator gets us a percussive metal sound. But is it cowbell? We still have to tune the oscillators up.

The classic, love-it-or-hate-it, cowbell sound of the 1980’s has to be the Roland TR-808. And if you look through the 808 service manual (PDF download) you’ll see that it uses two square waves from a 40106 chip simply mixed together. We’re improving on that by XORing, but we can still learn a bit from Roland. In particular, they tune their oscillators to 540 Hz and 800 Hz.

Because we’re XORing two oscillators together, our peaks come in at the sum and difference frequencies. This means that we’ve got to solve X + Y = 800 and X – Y = 540. Grab pencil and paper, or just note that adding the two equations gives 2X = 1340, so X = 670 Hz and Y = 130 Hz: tune the individual square wave oscillators to 670 and 130 Hz respectively. At least, to get something like that classic cheesy cowbell sound.

Amplification Aside

We’ve been trying to stick to the use of purely CMOS logic chips here, but this session we broke down and used a transistor. The reason is that the audio input on our laptop insists on a bipolar, centered audio signal. In contrast, the output of our “VCA” sits mainly at zero volts with very short peaks up around one volt. The input capacitor in the laptop is charging up and blocking the VCA’s diode output. Boo!

Indeed, we can’t use our old tricks with the 4069UB as an amplifier here either. The 4069UB works great for signals that are centered around the mid-rail voltage, but distorts near either GND or VCC. Unfortunately, we’d like our quiet drum sounds to taper off to zero volts rather than the mid-rail, so we’ll have to use something else to buffer our audio with.

quick_and_dirty_transistor_amp.sch

The solution is to buffer the output with something suited to this unipolar signal, and the simplest option is a plain-vanilla NPN transistor hooked up as a common-collector amplifier (an emitter follower). This configuration is a very useful analog buffer circuit; it puts out almost the same voltage as the input, but draws its current directly from the VCC rail and will certainly handle any sound card’s input capacitor. We used a 2N3904, but a 2N2222 or BC548 or whatever will work just fine.

Cymbals

Cymbals and similar metallic percussion instruments were pretty tricky to synthesize in the early days of drum machines. Until the LinnDrum introduced sampled cymbals, most just used a shaped burst of white noise. The aforementioned TR-808 used six 40106 oscillators linearly mixed together to approximate white noise. Again, we’ll improve on that by running it all through XORs with the result being somewhere between many oscillators and pure noise depending on how you set the oscillators up.

The inspiration for this circuit is the fantastic Synbal project (schematic in PDF) from “Electronics & Music Maker” magazine in 1983. It’s a much more complicated affair than what we’re doing here, but if you look at the left-hand side of the schematic, that’s the core. (If you’re copying the Synbal’s fixed frequencies for the oscillators, note that he uses 0.01 uF capacitors and we use 0.1 uF caps. Divide the feedback resistors by ten accordingly.)

cymbals.sch

The trick to the cymbal circuit is making a lot of oscillators. We’ll hook up six of them, finally filling up our 40106 chip. Then combine any pair in an XOR, take the output of that XOR, and combine it with another oscillator. You’ve now got a complex oscillator that has used up three 40106 oscillators and two XOR gates. Repeat this with the remaining oscillators and XOR gates and you’re nearly done. Connect the final two XOR outputs through resistors to the output.

As with the cowbell circuit, this circuit can be made to sound “realistic” by picking the different component frequencies just right and tweaking the decay. We think that it makes a pretty decent hi-hat sound with a couple of the oscillators pitched high (1 kHz and up). On the other hand, if you’re into noise music you can skip the VCA altogether and tune the oscillators to similar, low frequencies. You get a vaguely metallic, almost rhythmic machine drone. Not to be missed.

Extensions

We’ve snuck it in under the guise of making a cowbell sound, but the quick-and-dirty VCA here is also useful for modulating most of the synth voices we made in the first few sessions. We went for a percussive attack by using a capacitor to couple the driving square wave to the VCA, but there’s no reason not to use a variable resistor in its place to charge up the capacitor more slowly. If you do this, note that the attack and decay potentiometers will interact, so it’s a little quirky, but what do you want for two diodes anyway? Also note that any other way you can think of delivering an interesting voltage to the junction of the two diodes is fair game.

The XOR-as-frequency-mixer technique is pretty great, but you can also get a lot of mileage by using the XOR as a logic chip. Combining different divided-down clock outputs (from a 4040, say) with XORs makes interesting sub-patterns, for instance. And we’ll get more use out of the XORs in two sessions when they’re coupled with shift registers.

Next Session

We’ve got a whole lot of possibilities by now. We’ve got some good, and some freaky, percussion voices. We’ve got a bunch of synthesizer sounds, and if you recall back to the 4051, we’ve got a good way to modulate them by switching different resistors in and out. It’s time to start integrating some of this stuff.

If you’re following along, your homework is to build up permanent (or at least quasi-permanent) versions of a couple of these circuits, and to get your hands on at least two 4017 decade counter chips. Because next week we’ll be making drum patterns and introducing yet one more way to make music.


Filed under: digital audio hacks, Featured, musical hacks, slider

by Elliot Williams at April 10, 2015 02:00 PM

LinuxMuso

Learning With Memory Mapping or Mind Mapping

Memory mapping is a powerful technique for stimulating your thoughts, and a real boost to memory power.
The technique starts by drawing associations on a piece of paper. First write the key idea or word in the center, then write the sub-ideas close by and connect them with lines.

This is a powerful memory-enhancing technique because it operates on a number of levels at once:

  • Diagramming information converts the incoming mass of data into concepts and images that are meaningful to you.
  • It draws on the left brain’s verbal, analytic abilities and the right brain’s spatial, visual abilities, reinforcing facts and data simultaneously in the memory circuits on both sides of the brain.
  • By jotting down key ideas and indicating connections between them, you personalize the data, arranging them in a way that is meaningful to you.
  • Because there is always space for further ideas and connections, you are prodded to keep looking in new directions.
  • Since the key elements are all right there on one sheet, it’s easier for you to see important connections.
  • Consciously processing the information – rather than passively listening or reading – makes it more likely you will remember it.

Conclusion: memory or mind mapping is a very efficient memory-boosting technique that works by involving all the senses. I used it in college, where I got only one C grade and mostly B+ and A+ grades.


by pete0verse at April 10, 2015 01:27 PM

ardour

Ardour on Windows: help make it real

Thanks to several years of slow and steady work, Ardour now compiles and runs on all versions of the Windows operating system. For several months, we've offered ready-to-run versions via our nightly build site. We're almost ready to release Ardour 4.0, which will be the first major release of the program that can run on Windows. We'd love to see Ardour running on Windows more and more.

BUT ...

read more

by paul at April 10, 2015 01:49 AM

April 09, 2015

OpenAV

LAC 2015 – OpenAV talk :)

LAC 2015 – OpenAV talk :)

Hi all, a quick note that OpenAV will be talking about progress, projects, and AVTK at 10:45 German time (that’s GMT+2). If you’re interested, do click into the live stream: AVTK + more talk – Live Streaming link. Questions can be posted via IRC using Webchat on Freenode. Will report back after LAC. Cheers! Read more →

by harry at April 09, 2015 11:25 PM

Libre Music Production - Articles, Tutorials and News

How to use outboard gear in Ardour

There may be times when you are mixing that you want to, or have to, use external effects or processors. To do this in Ardour, we use inserts. So how do these work and how do we set it all up? Let's find out!

Why use hardware effects?

Using plugins is very convenient so why would you want to use a hardware unit? The two main reasons would be -

by Conor at April 09, 2015 03:42 PM

Arduino and MIDI out

Arduino.

Note! We have received some concern about connecting "directly" from the Arduino to the MIDI in port of a HW synth. Although we have done this successfully with several different HW setups, note that LMP does not take any responsibility for any catastrophic results of this tutorial. In the next part of this tutorial, "Arduino and MIDI in", we will introduce the optocoupler.

by admin at April 09, 2015 11:28 AM

April 08, 2015

LinuxMuso

Rubik’s Cube and Memory Techniques, Little Journey,Loci

As a Rubik’s cube fan, I need to memorize algorithms to solve the blessed cube. I used the loci and little-journey memory techniques to memorize this algorithm for solving the top corners of the cube. The algorithm goes like this: U R Ui Li U Ri Ui L. First the upper layer is turned one twist, then a right-side move is needed, followed by a top inverse, then the left side is turned counterclockwise one twist. The second part of the algorithm is the same as the first except for the second move, which is a right inverse, and the last move, which is a left turn. Now you could say, “why is it so hard to remember 8 moves?” Well, I guess it is not, but the point of this experiment was to apply some learned memory techniques and put them to the test. I decided to take a little journey through the apartment: by the door of the office, look up to the vent (U); look right into the bathroom (R); look down to the basket (Ui); walk left backwards into the hallway (Li); look up to the vent (U); look at Dusty’s cage (Ri); look down (Ui); look at the table (L). This completes the algorithm, which is the standard method for solving the top corners as explained in the 7-step solution booklet that is sold with the 3x3x3 cube. Myself, I combine two methods: the one in the booklet and one I picked up from a Chinese cuber on YouTube. Ni hao!
Have fun!


by pete0verse at April 08, 2015 01:07 PM

April 07, 2015

LinuxMuso

Sound Synthesis Visually Explained.

I remember sitting in front of the Commodore 64 around ’82 as a kid, trying to figure out the modular synthesis operations possible with its then quite advanced sound chip, the MOS Technology 6581 – better known as the SID 6581. I have been getting busy with some electronics and Arduino stuff lately, so I might just make myself a MIDI sound module or some such based on that chip. Anyway, at the time it was a big feature one-up for its intended market audience, the hobbyists. I was somewhat left to my own devices to figure things out; mind you, we didn’t have the Internet then, so a trip to the library was the best thing to do. To my chagrin they did not have a single book on hand. So I just did what I do, fiddled with some parameters, and quite possibly invented New Wave or Emo doom! Hahaha, good fun that was. But this schematic will sum it up nicely for you, so now fire up that Oberheim Bristol plugin or the many classics available on Linux and throw some switches and turn those knobs!
L06_a_overview-synthblock-01


by pete0verse at April 07, 2015 08:09 PM

Libre Music Production - Articles, Tutorials and News

LMP Asks #7: An interview with Giovanni A. Zuliani

This month we talked to Giovanni A. Zuliani, the founder and project manager of Giada LoopMachine, a minimal, hardcore audio tool for DJs, live performers and electronic musicians.

Hi Giovanni, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

I live in Milan (northern Italy), along with my team, and I work as a freelancer/IT consultant. Lately I’ve been doing a lot of web development – with pleasure, I must say.

by Conor at April 07, 2015 06:54 AM

April 06, 2015

Libre Music Production - Articles, Tutorials and News

OpenAV ArtyFX 1.3 Call for testing

OpenAV have just put out a call for testing for the upcoming release of ArtyFX 1.3. The code is available on OpenAV's GitHub repository right now for anyone interested in testing the plugins out. Be sure to file any issues via GitHub.

There are two brand new plugins added to the plugin suite. These are -

by Conor at April 06, 2015 03:48 PM

OpenAV

LAC Preparations

LAC Preparations

It’s been a while since the last post – so it’s a good time to let you all know what OpenAV has been doing. Harry will be talking at the LAC about AVTK, Fabla2, and a general OpenAV update – tune in if it suits your timezone; streaming links will be posted at a later date! ArtyFX 1.3: The next… Read more →

by harry at April 06, 2015 12:14 PM

April 05, 2015

LinuxMuso

GStreamer News

Outreachy Internship Opportunity

GStreamer has secured a spot in the May-August round of Outreachy (formerly OPW). The program aims to help people from groups underrepresented in free and open source software get involved by offering focused internship opportunities with a number of free software organizations twice a year.

The current round of outreachy internship opportunities is open to women (cis and trans), trans men, genderqueer people, and all participants of the Ascend Project regardless of gender. The organization plans to expand the program to more participants from underrepresented backgrounds in the future. You can find more information about Outreachy here.

GStreamer application instructions and a list of mentored projects (you can always suggest your own) can be found at the GStreamer-Outreachy landing page. If you are interested in applying for an internship position with us, please take a look at the project ideas and get in touch by subscribing to our development list and sending an email about your selected project idea. Please include [Outreachy 2015] in the subject so we can easily spot it.

GStreamer's participation in this round of the program is being sponsored by Samsung's Open Source Group. The deadline for applications is April 10.

April 05, 2015 06:00 PM

April 03, 2015

Scores of Beauty

Using Lilypond in the Platonic Music Engine

I have a music generating software project, written in Lua, called the Platonic Music Engine (PME). It started its life as a simple tool to be used by several other projects I’m working on, but it has gotten so out of hand that it has taken on a life of its own. It aims to do everything musical that can ever be done. Part of this goal is to be able to generate sheet music in a variety of styles, which is where Lilypond comes in.

The PME operates by taking an initial input from the user – a name, a number, or any random string of characters – and turning that string into a piece of music, a single melodic line. This music is not constructed with any conventional aesthetic ideas in mind. In fact it’s basically random. Technically it’s pseudo-random, which means the music is deterministically generated: the same initial seed produces the same music each time, yet the results appear to be random. In the original incarnation of this project the PME was only going to quantize the score to make it playable on the instrument of the user’s choice while altering the musical qualities of the original music as little as possible (ie, it would still sound random).

A friend convinced me that producing random-sounding music probably was not the way to gain a large enough following willing to pay the bills. I reluctantly agreed and began adding what I call style algorithms to the software, which allow the user to manipulate that initial piece into sounding like any style of music ever devised. Or it will eventually; right now I’ve only programmed a few style algorithms, like one that creates a very simple Bach-like Invention, one that creates serial music, another that creates guitar chords, and others which you can see here.

In addition to producing music in a wide variety of styles it also generates sheet music for those pieces. It does several styles of graphic notation (using LaTeX) as well as standard notation using Lilypond. You can see examples of all of these here and below.

An example score produced by the PME using minimal quantization, ie, maximal random-soundingness. (click to enlarge)

Audio for the sheet music above:
http://lilypondblog.org/wp-content/uploads/2015/03/Full_Piano_Example.mp3

An example of a graphical score. This one is an adaptation of Feldman’s graph notation based on the sheet music above. It makes use of an entirely optional dynamic shading feature where the darkness (light, medium, and dark) matches the volume (soft, medium, and loud). (click to enlarge)

In the first example above a “minimal amount” of quantization was used, but that isn’t exactly accurate. What it means is that I told the computer to take a piece of music that has more notes (128 possible notes vs the 88 available on a piano), a greater range of durations (32767 possible values vs the 15 in use in the software), and more gradations in volume (128 vs the 12 used in standard notation—ppppp to fffff) and squeeze it down to fit what an actual human can play on an actual instrument. For this example I told the computer to use the full range of values it had available for this instrument and to treat them all equally. A more “aggressive” quantization would have meant telling the software to “weight” some of those values: only using the notes of the C-major scale, and of those putting extra emphasis on the tonic and dominants and not using the supertonic at all; then only using p, mp, mf, and ff along with just eighth, quarter, and half notes; and on top of all that only using the middle two octaves of the piano. The user has a near limitless variety of options they can pass to the PME in order to quantize the initial music into sounding like whatever they desire – and that’s before applying any actual style algorithms to it.

Using Lilypond with the PME

The PME, when called upon to do so, can generate a Lilypond file representing the music it has generated. Getting it to do this hasn’t been an easy process. We are all used to having complete control over our Lilypond files in order to get the exact look we want; in this project the software has to produce perfect or nearly perfect results without any intervention at all. Users cannot be expected to edit the resulting Lilypond files themselves. This is especially the case when the PME goes live in an online version that allows anyone without any music experience to generate music.

The good news is that Lilypond is up to the task. In fact I cannot imagine any other program being able to meet this requirement. There are other text-based engraving programs but none that produce such polished output. And of course the usual suspects of programs that have graphical interfaces are of no use here.

In order to produce good results I have had to put quite a bit of effort into the process of generating Lilypond files. The rest of this post will discuss some of the details of this process.

An interesting default feature of Lilypond is that it does not automatically split notes and rests across bar lines. It is expected that you, as the person creating the Lilypond file, will manually indicate how and when notes are split. In the case of the PME, many of the style algorithms do not generate music that is actually written with a time signature in mind; they produce what are basically long cadenzas. But I decided that it looks better, and is easier to read, if there are bar lines anyway. The user is allowed to choose any time signature they want in most style algorithms, but because those algorithms do not generate music with any time signature in mind, this creates a problem: Lilypond will just put a note at the end of a measure, with its full duration, regardless of how little sense it makes.

Fortunately Lilypond also gives you the means to change this behavior so that notes and rests are split across bar lines in a fairly sophisticated manner. You simply swap in the completion engravers inside a Voice context in your \layout section:

\layout {
  \context {
    \Voice
    \remove "Note_heads_engraver"
    \consists "Completion_heads_engraver"
    \remove "Rest_engraver"
    \consists "Completion_rest_engraver"
  }
}

Now Lilypond will split your notes across bar lines.

The first two measures of the score above. The snippet on the left shows the default Lilypond behavior; the snippet on the right shows Lilypond automatically breaking notes across bar lines.

The PME will generate music for any instrument that it has a definition for (right now this comprises the instruments making up General MIDI plus a few others) and tries its best to make sure the sheet music is appropriate for that instrument. Besides making sure the music fits the range of the instrument, it also attempts to generate the sheet music using whatever is standard for that instrument, including the proper clef and transpositions.
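
A toy version of such an instrument table might look like the following Python sketch; the names, ranges, and fields are illustrative only and are not the PME's actual definitions:

# Hypothetical instrument definitions: MIDI range, clef, and written transposition.
INSTRUMENTS = {
    "violin":   {"low": 55, "high": 103, "clef": "treble", "transpose": 0},
    "clarinet": {"low": 50, "high": 94,  "clef": "treble", "transpose": 2},  # Bb, written a tone higher
    "cello":    {"low": 36, "high": 84,  "clef": "bass",   "transpose": 0},
}

def fit_to_instrument(pitch, name):
    """Fold a MIDI pitch into the instrument's range by octave transposition."""
    inst = INSTRUMENTS[name]
    while pitch < inst["low"]:
        pitch += 12
    while pitch > inst["high"]:
        pitch -= 12
    return pitch

print(fit_to_instrument(30, "violin"), fit_to_instrument(110, "cello"))  # 66 74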

One aspect of making music look good and natural for an instrument is the judicious use of ottava markings. Because the PME does not allow us the privilege of manually altering the Lilypond code, we either have to do without ottava markings or figure out a way to automatically add them from within the software. Originally I wrote a lot of code to calculate good spots to insert ottava markings based on the octave of the pitch. Unfortunately it was very difficult to generate good results: either the marking would come in too late (the pitch too far above or below the staff) or too early, and there was little I could do about it without devoting a massive amount of time and code to tracking additional data.

Fortunately the amazing Lilypond community came to the rescue. I posted a question about automatic ottava markings to the Lilypond list, and while Lilypond has no such built-in capability, it does provide enough of a customizable framework via its Scheme scripting engine to add these kinds of features. In fact David Nalesnik very generously created a Scheme function that automatically inserts ottava markings based on the number of ledger lines used, instead of the octave of the note, which had been my approach. This Scheme function allowed me to rip out a significant amount of code that I no longer have to maintain and update when changes are made, and the results look even better. You can find a download link for this function here and see it in action in the first image above.
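
For readers who want the gist of the ledger-line criterion without reading Scheme, here is a rough Python illustration of the idea (this is not David Nalesnik's actual function; staff positions are counted in diatonic steps from the middle line of a treble staff, and the threshold is made up):

def ledger_lines(staff_position):
    """Ledger lines needed for a note this many steps above/below the middle line."""
    if staff_position > 4:        # above the top line
        return (staff_position - 4) // 2
    if staff_position < -4:       # below the bottom line
        return (-staff_position - 4) // 2
    return 0

def ottava_direction(staff_position, threshold=3):
    """Return +1/-1 when an 8va/8vb bracket should start, else 0."""
    if ledger_lines(staff_position) < threshold:
        return 0
    return 1 if staff_position > 0 else -1

for pos in (2, 6, 10, 12, -11):
    print(pos, ledger_lines(pos), ottava_direction(pos))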

Similarly, dealing with enharmonic spellings has been a huge headache. This involved multiple tables for different keys and root notes. And whenever fundamental changes were made to the underlying infrastructure, I had to update all the tables and the code that processed them. It was an even larger and more unwieldy mess than the ottava situation above.

And then, while looking through the mailing list, I came across a post by Peter Gentry describing some work he had done extending an existing Scheme function dealing with automatic enharmonic spellings. We discussed the issue with several other people, and he ended up creating a function that automatically forces the appropriate enharmonic spelling for whatever key you're in. Here's a link to my GitLab project page where you can download his code.

With this function in hand I was able to rip out all of my code that tried to use the appropriate enharmonic spellings in the sheet music. As it stands now the PME only generates Lilypond code with sharps (and naturals), and the enharmonic.ly function determines whether they should be flats instead and makes that change. This has improved my code tremendously and produces perfect results.
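
In spirit, the division of labor is something like this Python sketch (a made-up simplification; the real respelling happens inside Lilypond via Peter Gentry's enharmonic.ly Scheme code, not in the generator):

# The generator only ever emits naturals and sharps (LilyPond note names);
# flat keys get their sharps respelled as the enharmonically equivalent flats.
SHARP_TO_FLAT = {"cis": "des", "dis": "ees", "fis": "ges", "gis": "aes", "ais": "bes"}
FLAT_MAJOR_KEYS = {"f", "bes", "ees", "aes", "des", "ges"}

def respell(note, key):
    """Return the spelling appropriate for the key."""
    if key in FLAT_MAJOR_KEYS:
        return SHARP_TO_FLAT.get(note, note)
    return note

print([respell(n, "ees") for n in ("c", "dis", "g", "ais")])  # ['c', 'ees', 'g', 'bes']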

In this example the notes sent to Lilypond were all naturals and sharps and the enharmonic.ly script converted the sharps to the enharmonically equivalent flats. This example also shows more aggressive quantizing starting from the same source material as above. (click to enlarge)

One final example of Lilypond's functionality and then I'll wrap this up. One of the style algorithms I created is basically a musical version of John Cage's mesostic method of generating poetry from a source text (see below for an example from Cage, where he found the spine "James Joyce" in Joyce's Finnegans Wake). My style algorithm creates a standard score and then "finds" famous melodies within it, like the opening theme to Beethoven's Fifth Symphony or the theme from Bach's Toccata. The sound file plays the source music very softly, and when the found melody notes occur it plays them loudly, providing a kind of aural analogue to the typographic method Cage employed. But then I also wanted the score to make the found melody clear. Lilypond colors each melody note red and, depending on the style chosen, either makes the melody note always appear at the beginning of its own line (acrostic), somewhere in the middle of its own line (mesostic), or simply displayed following standard notation practice (standard). Implementing this was trivially easy and did not involve much more than adding the line

red_note = { 
  \once \override NoteHead.color = #red 
  \once \override Stem.color = #red 
}

to my Lua code and then inserting the command, \red_note, into the appropriate spot while the software generates the Lilypond file.

whase on the Joint
           whAse
          foaMous
          oldE
            aS you

             Jamey
             Our
       countrY   
   is a ffrinCh
        soracEr this is

(Mesostic created by John Cage using “James Joyce” as the spine and Finnegans Wake as the source text to search through.)

The Musical Mesostic style algorithm using the acrostic formatting. The spine is a rather famous motif. (click to enlarge)

A rather famous musical motif should be audible above the rest of the music: http://lilypondblog.org/wp-content/uploads/2015/03/Mesostic.mp3
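
The melody-finding step itself can be sketched roughly as follows (an invented Python illustration of the mesostic idea, not the PME's actual algorithm): walk through the generated notes and collect, in order, the next note whose pitch class matches the next note of the spine motif; those are the notes that get tagged with \red_note.

SPINE = [7, 7, 7, 3]  # pitch classes of a rather famous four-note motif

def mark_spine(pitches, spine=SPINE):
    """Return indices of one scattered occurrence of the spine within the piece."""
    hits, needed = [], 0
    for i, p in enumerate(pitches):
        if p % 12 == spine[needed]:
            hits.append(i)
            needed += 1
            if needed == len(spine):
                return hits
    return []  # spine not found

source = [60, 67, 62, 55, 71, 67, 64, 63, 70]
print(mark_spine(source))  # [1, 3, 5, 7]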

The Future and Final Thoughts

I am already anticipating certain challenges to come. One of them is a generic method of indicating microtones using any tuning, including those that divide the octave into any number of divisions (e.g., 128-EDO, 1,000-EDO, 3,728.31-EDO). My thought is that once we're past what can be done with the normal accidentals (quarter sharps, triple quarter sharps, and ignoring enharmonic spellings) there will need to be a new approach. One that I've successfully experimented with is creating accidentals that show the percentage distance from the base note to the pitch that actually sounds. So if a note is 18.4% of the way between its root note, G, and the next note, A, then the accidental would be something like "184" arranged vertically and attached to the G. Not that this would be particularly useful for a musician, or that any instrument exists that can play in 1,000-EDO tuning, but the sheet music will be accurate and still look good. This is the Lilypond code I used in my tests:

{
  \override Accidental #'stencil = #ly:text-interface::print 
  c' e' 
  \once \override Voice.Accidental.text = 
  \markup { 
    \override #' (baseline-skip . 1.2) \teeny 
    \column {1 8 4} 
  } 
  gis' e'
}

Proof of concept for a new accidental style for microtones. This is a G that is 18.4% of the way between a G and an A.
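
Generating the digit column for such an accidental is a one-liner on the software side; a hypothetical Python helper (using the example from the text, with 1,000 steps between neighboring notes) might look like:

def accidental_digits(fraction):
    """0.184 of the way to the next note -> ['1', '8', '4'] for the markup column."""
    return list("%03d" % round(fraction * 1000))

print(accidental_digits(0.184))  # ['1', '8', '4']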

More work will need to be done to make this method work really well, but I feel like I've demonstrated that it can be done. (Now no one should be surprised when a post to the mailing list shows up with some questions about how to tweak this code.)

I am happy with the graphical scores that I am producing with LaTeX, but I feel confident that I could also produce them in Lilypond using PostScript (or some other method?). This would make formatting the headers and title pages much easier and, if done in a general enough fashion, could be useful for other Lilypond users. If someone wants to help me understand PostScript and how to integrate it into Lilypond, I'd love to tackle that issue.

Additionally, any composer who would like to see their works adapted for the PME or any musician who has an interest in seeing the works of any composer of any style from anywhere in the world from any time similarly adapted, please contact me. I want to include every musical idea that has ever existed or will ever exist and I really enjoy the process of collaborating with others in making these things happen. An example here would be to take the very simple Bach-like Invention style algorithm I’ve done and replace it with algorithms that more closely match what Bach actually did, like an algorithm for each Invention. Basically if there’s any piece of music or musical style you’d like to see algorithmized I am up for it.

At the end of the day Lilypond has proven itself capable of generating beautiful scores automatically without any human intervention. My software creates a Lilypond file and with some specific choices made by me and plenty of help from the community (both answering my questions and generously providing new Scheme functions), the resulting sheet music is both accurate and beautiful.

I remain further convinced that there is simply no other engraving product available that can produce these results at this level of quality. And I also remain certain that no matter what issue I run into, a solution is available thanks to the wonderful Lilypond community and the hard work all the developers have put into making Lilypond the powerful and flexible product that it is.


David Bellows studied composition at various schools in East Tennessee. He has written music for the concert stage and gallery openings and of late has been taking private commissions. He is currently in Bellingham, Washington, US, and is looking for a place to live.

by David Bellows at April 03, 2015 07:00 PM

Create Digital Music » Linux

Bitwig Studio as Instrumental Pack: Cristian Vogel, u-he Deals

vogel

Bitwig Studio turns one year old this week, and they’re keen to use the occasion to convince you to try their software. But the pitch takes a different angle, focusing more than ever on the particular bundle of instruments and effects. There’s an artist pack by Cristian Vogel that makes it clear what’s possible (hello, granular) — and, ending on the 7th of April, deals on U-he instruments to sweeten the pot.

For soft synth lovers, it might be a haul of Easter candy that convinces you to bite.

We’ve already talked about my affection for Cristian Vogel: he’s an artist with a uniquely expressive sonic character, manifested in thumbprints on all his musical output.

And in a new creation for Bitwig Studio, he confirms some of my suspicions about the best potential of that software. When Bitwig Studio came out, many looked to it to be some sort of new direction for DAWs, in particular relative to Ableton Live, which it closely resembles. But the software, for now, doesn't differ so much from other tools on the market.

Where Bitwig seems to come into its own is with its instruments and effects. If you're looking for a bundle of new sounds wrapped inside its particular production workflow, in other words, that's where the software starts to shine.

In Vogel’s capable hands, that’s what you hear – granular and other effects producing a particular digital sound, like the white-pebble ice storms we’ve been seeing in Berlin the last few days. Watch:

Here’s what he writes about his contribution:

I mostly like the way that it’s a kind of modular effects rack with a lot more possibilities than first meets the eye. Actually, if you’re into making patches or modular synths, then you’ll get it straight away. You can wrap elements inside of other elements and control many parameters with just one turn. Really, you can imagine and invent all sorts of things… pretty wild combinations.

So after a few late nights of patching stuff up using only the built-in FX modules and some samples from my own archives, of course, I started to come up with unique inventions: like a kind of Bitwig-only-style crazy granulator. It chops up samples into little grains which turn into long clouds that rain down into a quantized dust and gets frozen into rainbow icicles … sort of thing.

0324_Wavoloid_06

0324_DeviceChain_06

0324_DeepFreeze_06

But Bitwig really want you to buy Bitwig Studio. And this week’s deal is pretty stellar, if you’re interested: you get u-he’s amazing ACE or Bazille free when you buy Studio this week.

Already own Bitwig Studio? There's a US$50 gift voucher for existing users toward the purchase of any u-he plugin.

More on that developer:
http://www.u-he.com/

Cristian Vogel artist pack:
http://www.bitwig.com/en/community/artist.html

And details on the plug-in deals and 1 year anniversary:
http://www.bitwig.com/en/bitwig_1year.html

Are you using Bitwig Studio? Care to share impressions or tips? Let us know in comments.

The post Bitwig Studio as Instrumental Pack: Cristian Vogel, u-he Deals appeared first on Create Digital Music.

by Peter Kirn at April 03, 2015 03:58 PM

April 02, 2015

PipeManMusic

Do Something!

I'm a very opinionated person, and I don't think there is much I can do about that. Most of the time I try not to force my personal opinions on people; some of my friends and family might disagree, but I do honestly try. I like to think most people arrive at their opinions honestly, and that they represent a perspective, however different from mine, that is informed by things I might not be able to understand. I do know that my opinions on things have changed, or maybe even evolved, with time, and I'd like to think we are all on a path headed towards our dreams. Maybe at different points on the path, but still on a path. If I can help someone down the path with me, I try to do it. What I won't do is push someone to gain ground on something by force.

In my own head I don't think I have a single personal philosophy that guides my life. Most of the time I feel like I'm drowning in my own self-doubt. However, I do get put into the position of offering advice on people's lives more than I'm comfortable with. Most of the time I just try my best to nudge people in a positive direction.

Lately, however, I've been giving more and more thought to what I would call my personal brand of guiding wisdom. Now I obviously don't have the answer to eternal happiness, world peace, or even how to not annoy the crap out of everyone by accident. The reality is, I'm pretty useless at making other people's lives better most of the time, despite my grand ideas for changing the world.

What I do know is that when I'm at my most depressed or discouraged, I can always dig myself out, even if it feels at the time like I never will. I don't have a magic silver bullet, but I do know that every day I can choose to do at least one thing that makes my life or the life of those around me better, and I think that mostly sums up my approach. As I've thought about it, I've boiled it down to something fairly concise.

"Do Something"

What I mean by that is you might not be able to control everything that happens to you, and you also might not be able to control the way you feel about it. What you can do is move yourself down the path. Sometimes it's a moon-surface leap and sometimes it's crawling through glass, but progress is progress. No, this won't guarantee that your bills will get paid, that you'll save your marriage, or that you'll heal a childhood pain. It might not even make you feel better. What it will do is put you a little closer, bit by bit.

If you are like me, most things feel overwhelming. I can be pretty hard on myself. I once told someone, "You can't say anything to me more hurtful than what I've said to myself." I think it might be one of the most honest things I've ever said. What I have found, though, that helps me more than anything is doing something. Anything. As long as it's a positive step in the right direction. Even if it's just one small step with a million more to go, it's one step closer to my final destination.

No matter how small the gesture, it can at least help you get into a better head space. It could be something for yourself, like knocking out chores you've been avoiding, or something huge like finally telling someone how you care about them. You don't even have to do it for yourself. Sometimes when I'm at my lowest it helps to think about the things I wish others were doing for me at that moment and do them for someone else. One example: for my own narcissistic reasons, I really like things I post to social media to get liked by my friends and family. Sometimes a post that I feel really strongly about or connected to will get almost completely ignored, and it will send me into a tailspin of self-doubt. In all likelihood there are multitudes of reasons people didn't take the time to click "like", and most are probably not related to me or my personal feelings. So, even in this silliest of first-world-problem situations, I try to reach out to others, click like on things my friends post, or leave a positive comment. I would never do this disingenuously; I only click like or leave a positive comment on something I actually like. I'm just trying to go a little more out of the way to make someone else feel good.

Now, does this achieve anything measurable? Most of the time, no. Most of my friends are likely unaware I do this. Does it suddenly make all my neurotic obsession over whether people like me go away? Not at all. What it does, though, is put me at least half a step closer to feeling better, and more often than not it's enough to give me a clear head to see the next step I need to take. Sometimes that next step is one of those moon-surface leaps that I can't believe I didn't take before.

Don't get me wrong, I don't hinge my day to day feelings on these silly little acts. Mostly I've learned about myself that I really like the feeling of creating something so I try to focus on those kinds of activities. I have loads of hobbies and things that I do that keep me moving forward. I think those count too. What I try not to do is sit around and think of all the things I should be doing and know for sure I won't do. I'd rather focus on the things I can do than the things I can't.

So now I think I can feel a tiny bit more comfortable in offering someone advice. Just "Do Something." As long as it's positive progress, it's worth it. No matter your situation, you can at least do something to make it better. No matter how insignificant it might seem at the time. I even keep a small daily journal where I try to write down the positive things I did that day. I also write some of the negatives but as long as there is at least one positive, it helps.

So?!?!

Do Something!

That's the best I've got.

by Daniel Worth (noreply@blogger.com) at April 02, 2015 07:45 PM

April 01, 2015

Nothing Special

What a difference a new release makes (or: LTS vs Latest)

I've been sticking with Ubuntu's LTS releases since I got out of school, mostly because I don't have the time anymore to do a monolithic upgrade and then fix everything it breaks afterward. I've actually been championing them, because changing your whole system every 6 months is just asking for trouble. But I've been running 14.04 on this new MacBook Pro at work and haven't been wowed by the hardware support. I think LTS is perfect for my 6-year-old Thinkpad that has had full Linux hardware support OOTB for 5 years or so, but with the latest greatest hardware, you need the latest greatest software.

As usual the kernel has a lot of driver improvements already, but 3.13, or whatever kernel 14.04 was stuck on, didn't have them. The machine was working well enough to get my tasks done, but I had very frequent wifi disconnects, I couldn't hot swap the mini-display port for my second monitor, and several other inconveniences reigned. I'd tried just updating the kernel in 14.04, but I think I needed to recompile glibc and/or a whole bunch of other things for it to really work, and I haven't had time to go figure that out either. Amazing what I'll put up with just so I can use i3wm.

So yesterday I couldn't take it any more. I wanted some of those fixes in the recent kernels, so I did the monolithic upgrade to "Utopic". I did it through the terminal (sudo do-release-upgrade) and everything went quite smoothly. It took most of the day, but already I feel like it's better (placebo effect?). 14.10 is using kernel 3.16, and next month I'll go to 15.04 when it's released, which will have kernel 3.19 (which should at least fix thunderbolt/mini-display hot swapping, if I understand correctly).

Oops, spoke too soon. Wifi just dropped. (Which, by the way, happens to the guys actually running OSX on theirs too; we've got weird wifi issues at the office.) So here's hoping 3.19 will be the bee's knees. Fingers crossed.

by Spencer (noreply@blogger.com) at April 01, 2015 11:08 AM

March 31, 2015

Create Digital Music » open-source

Now littleBits Modules Play with MIDI, USB, CV: Videos

Hirumi_IMG_0091LR

littleBits’ Synth Kit began as a lot of fun. Snap together small bare boards connected by custom magnets, and you can create basic synthesizers, or mix and match more exotic littleBits modules like light sensors. No soldering or cable connections are required.

But while you could use various littleBits components, your options were comparatively limited as far as connecting to other gear. That changes today with the release of new modules for MIDI, USB, and analog Control Voltage (CV), ranging from $35 to $40 each.

There are three modules, each made in collaboration with KORG.

You can also buy a US$139.95 “Synth Pro Pack” that includes two of the CV modules, a MIDI module, a USB module, mounting boards, and cables.

propack

Let’s look at the modules one by one, then see what they can do:

IMG_5669_03_LR

MIDI

Costs US$39.95. This is the most useful of the three, to me, and the easiest no-brainer purchase if you’ve got a Synth Kit. You can route MIDI in and out of a littleBits rig to any other MIDI hardware – though you have to choose one or the other, by setting the single minijack to either “in” or “out.” And you can run MIDI in and out over micro USB to a computer. (The module operates driver-free, or you can install an optional driver from KORG – probably only if you’re on Windows would you want to do that.)

In effect, this module also works as a littleBits CV-to-MIDI converter, translating any analog input from a module to MIDI messages.

Applications: you can now use littleBits sensors or sequencers to play any MIDI instrument. Or you can play your littleBits rig using a MIDI controller or your computer sequencer.
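
For example, driving a littleBits synth from a computer is only a few lines of Python once the module is plugged in. A minimal sketch, assuming the module enumerates as a class-compliant USB MIDI port whose name contains "littleBits" (an assumption on my part) and that mido plus python-rtmidi are installed:

import time
import mido

port_name = next((n for n in mido.get_output_names() if "littleBits" in n), None)
if port_name is None:
    raise SystemExit("littleBits MIDI module not found")

with mido.open_output(port_name) as out:
    for note in (48, 52, 55, 60):  # a C major arpeggio
        out.send(mido.Message("note_on", note=note, velocity=100))
        time.sleep(0.25)
        out.send(mido.Message("note_off", note=note))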

IMG_5657_02_LR

Control Voltage

Cost: US$34.95. The CV module is basically the same idea, but with CV instead of MIDI. It couldn’t be simpler: you get one CV in, one out. Remember that CV connection for “littleBits” on KORG’s SQ-1 sequencer? Now we get to see it in action.

Now, because littleBits is already built around control voltages, this is a bit of fun. littleBits modules become a toolkit of various sensors and the like, as outputs to gear. And anything you have that generates control voltage – like a modular rig, for instance – now can be used to play synths you build with littleBits. Of the two, I suspect the former is more interesting than the latter, just because if you have a modular rig already, you can build something quite a lot more interesting and powerful than with littleBits. On the other hand, littleBits has all sorts of interesting motors and sensors, and if you don’t want to muck around with Arduino and the like, you can now snap together some strange sensors and quickly connect them to your modular rig with a $35 module.

IMG_5670_02_LR

USB I/O

Costs US$34.95. USB I/O handles just audio, but you can route audio both from your computer and to your computer. That makes it easier to record what you’ve made with your Synth Kit, if you don’t have an audio interface handy.

Also, because you can use an audio stream for control voltage, you can use the USB I/O kit to control modules.

So, in other words: the MIDI and I/O modules make it easier to integrate synths you’ve built with your computer and/or MIDI gear. The CV module I think will be most useful as a way of making strange new inputs for a modular synth.

Also interesting: today, littleBits has a bunch of partner videos showing off how these modules interoperate with other products. For instance, here’s Tony Rolando, of modular maker MakeNoise:

Or Peter Speer, showing some ideas for how to build interesting synths:

Patch ideas with the new littleBits Synth modules from Peter Speer on Vimeo.

Lysandre Follet shows how you might add a littleBits synth instrument to a larger Eurorack modular setup:

Icaro Ferre from Spectro Audio uses his software CV Toolkit to demonstrate how that powerful tool can be used to control other gear via CV. And… well, really, this is relevant to anyone interested in that software whether or not you want to use it with littleBits:

Cycling ’74 shows what they can do with Max/MSP:

Here’s a video Theremin:

And here’s the application I thought was sort of most cool, which is using the littleBits sensor modules to quickly interface with software:

More information:

Introducing: MIDI, CV, and USB I/O [littleBits Blog]

modularlittle

synthstuff

Bonus: It has nothing to do with littleBits – though it’s relevant to DIY. I love this DIY bow interface created by Peter Speer, which I stumbled across again while looking at his work:

Euro bow interface prototype from Peter Speer on Vimeo.

The post Now littleBits Modules Play with MIDI, USB, CV: Videos appeared first on Create Digital Music.

by Peter Kirn at March 31, 2015 04:48 PM

March 30, 2015

Scores of Beauty

Managing Alternative Fonts with LilyPond

Oh my, it’s been quite some time since my last post – fortunately there has been at least a guest post recently and there are more in the pipeline. I have been working hard under the hood and there are exciting things going on, but nothing was in a state to be presented here. Now finally I can break the silence and write about one of the topics, at the same time providing some glimpses into other parts of current development in LilyPond’s ecosystem.

As you know (if not: it was announced in “LilyPond’s Look & Feel”) Abraham Lee has enabled LilyPond to use alternative notation fonts, and he already has provided an impressive range of fonts with various styles that are available from fonts.openlilylib.org. In a two part series I’ll introduce you to this new world of stylistic variety that has become possible through Abraham’s work and that openLilyLib makes easily accessible now.

Today we’ll take a first tour through LilyPond’s font handling in general – as it has originally been designed, as it has been until now, and as it will be from now on. In the next post I’ll take the presentation of “Arnold”, a new font Abraham created upon my suggestion, as an opportunity to dig slightly deeper into this topic and show you the tools provided by openLilyLib in some more detail.

Accessing Alternative Fonts – Then and Now

As mentioned in my earlier announcement LilyPond wasn’t originally designed to use alternative fonts, and this was one of the more severe limitations with LilyPond. When I had the opportunity to talk with representatives of several major publishing houses the option of integrating custom fonts to achieve their house styles was one of the first questions that usually arose. When Simon Tatham created the first available replacement font one had to actually exchange the font files so LilyPond wouldn’t even notice that the font had changed. This is a somewhat hacky and non-trivial approach that is described on the font’s home page.

As of Lilypond 2.19.12 this changed, and for the current stable version 2.18 there is a reasonably simple patch to make it work. Now there is a single function available to select any compatible notation font that has been copied to LilyPond’s font directories. The following example simply shows the default values.

\paper {
  #(define fonts
    (set-global-fonts
      #:music "emmentaler"
      #:brace "emmentaler"
      #:roman "Century Schoolbook L"
      #:sans "sans-serif"
      #:typewriter "monospace"
      #:factor (/ staff-height pt 20)
  ))
}

This syntax allows you to choose which fonts to change and which ones to leave at these default values, so the following would be sufficient to switch the notation font (general and brace) to the LilyJAZZ font:

\paper {
  #(define fonts
    (set-global-fonts
      #:music "lilyjazz"
      #:brace "lilyjazz"
      #:factor (/ staff-height pt 20)
  ))
}

The process of getting and using the alternative fonts is described in detail on the documentation page on fonts.openlilylib.org. However, getting the fonts to be recognized in the first place is still a little bit awkward, as you can’t simply “install” the fonts to your operating system but have to copy them inside LilyPond’s own font folders. Concretely you have to:

  • Download the archive file for a font from the website
  • Extract it to disk
  • Copy the font files from two folders to two folders inside your LilyPond installation

As this depends on the actual LilyPond installation you’ll have to repeat that step for any additional LilyPond installation you may have (and developers can have numerous different builds in parallel to test different features), and also you’d have to repeat this every time you update LilyPond. The issue can be slightly alleviated by creating symbolic links instead of physical copies inside the LilyPond installations – yet this has to be done again each time too.
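
If you want to script the manual steps above in the meantime, something along these lines works. This is a minimal Python sketch, where the download location and the LilyPond data directory are assumptions you will need to adjust; the target folders follow the convention of otf files going into fonts/otf and svg/woff files going into fonts/svg:

import shutil
from pathlib import Path

FONT_DIR = Path.home() / "Downloads" / "improviso"   # extracted font archive (assumption)
LILY_ROOT = Path("/usr/share/lilypond/2.18.2")       # LilyPond data directory (assumption)

def install_font(font_dir, lily_root):
    """Copy a notation font's files into LilyPond's own font folders."""
    targets = {
        "*.otf":  lily_root / "fonts" / "otf",   # OpenType notation fonts
        "*.svg":  lily_root / "fonts" / "svg",   # SVG glyphs
        "*.woff": lily_root / "fonts" / "svg",   # WOFF files live alongside the SVGs
    }
    for pattern, dest in targets.items():
        dest.mkdir(parents=True, exist_ok=True)
        for f in font_dir.rglob(pattern):
            shutil.copy2(f, dest / f.name)
            print("copied", f.name, "->", dest)

install_font(FONT_DIR, LILY_ROOT)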

Accessing Alternative Fonts – The Future

Well, all of this was yesterday – but the future is bright :-) .
Keeping your collection of alternative fonts up-to-date has become a breeze now, and using these fonts has become much simpler and even more powerful as well, thanks to two of my latest projects.

Automating Font Management

With install-lily-fonts, managing a repository of alternative fonts has become a near-automatic process. This tool maintains a local repository and a catalog of fonts and versions and uses this to automatically detect any new or updated versions on the server. It then downloads new archives if necessary and “installs” them to one or multiple LilyPond installation(s). So regularly running this program (at least after updating or installing LilyPond versions) is all you need to make sure that you always have the complete and newest set of alternative fonts available for use with LilyPond! I think this should really encourage anybody to experiment with and use this newly available stylistic variety in music engraving.

So far I haven’t prepared an official “binary” release or included it into a Python repository, and I haven’t had the opportunity to incorporate this functionality in Frescobaldi (any help with this would be greatly appreciated). But you can download/clone/fork the tool from its repository on Github. Please visit that site for a more detailed description too.

Note: unfortunately I haven’t got any assistance yet to deal with this process on Windows too. So I assume that currently the program only works on Mac and Linux. If anyone has experience with handling symbolic links (or circumventing the need for that) on Windows with Python I’d be more than happy to accept help to make the tool available for all.

A New Library To Use Alternative Fonts

Now that we have the set of fonts available we also want to actually use them in scores. I’m pleased to tell you that this has become even more simple and powerful with the new interface provided by openLilyLib. Today I’ll only give you a short “teaser” about this, and in the next post I’ll go into more detail and present you the new functionality as well as differently styled engraving examples.

The original idea of setting up openLilyLib had been to have a place to store snippets of useful LilyPond code, similar to the official LilyPond Snippet Repository but not depending on a specific LilyPond version and having a Git interface for easier collaboration. Quite soon it was improved to be includable – you can directly \include modules from openLilyLib instead of having to copy & paste code from the LSR and integrate that into your own files. But right now openLilyLib is undergoing a fundamental redesign, and the new Font Interface I’m presenting today is part of it. So you will see examples of the new way of using LilyPond extensions, but I won’t go into any detail about that new infrastructure as it is not ready for a general release and proper announcement (of course the readers of Scores of Beauty will be the first to know about any breaking news ;-) ). All you have to know now is that in order to use the new font interface you need the latest version of openLilyLib and make its root directory and the ly directory inside it available to LilyPond’s include path.

The following code, inserted at the top of the input file, will change the notation font to the Improviso font:

\include "openlilylib"
\useLibrary Stylesheets
\useNotationFont Improviso

Improviso default appearance (click to view PDF)

  • The first line loads the general openLilyLib infrastructure, and I think putting this at the top of the input file will become a second nature for most, just as it is a no-brainer to start each file with a \version statement.
  • The second line loads the “Stylesheets” library – the new openLilyLib is organized around the concept of coherent libraries. The “Stylesheet” library so far only implements the font stuff but we plan to do much more with it in the not-too-distant future.
  • The last line finally switches the notation and the brace font to Improviso.

Maybe you noticed that the example doesn’t look like LilyPond’s default output with just the font replaced? Well, this is part of the additional horsepower I promised and that we’ll take a closer look at in the next post. So stay tuned …

Investigating installed fonts

So you know how easy it is to switch the notation font to one of the numerous alternatives that are available by now. But what if you just don’t have the list of installed fonts at your fingertips? Well, you might go to the font website to have a look, or you might investigate LilyPond’s font folder (if you know where this is). But fortunately openLilyLib provides a much more convenient way, and this is the last item for today’s post before I let you wait for the next post with more details and examples about the new fonts. As you may already guess the “Stylesheets” library provides a command for that purpose:

\include "openlilylib"
\useLibrary Stylesheets
\displayNotationFonts

which will produce something similar to the following on the console:

Installed notation fonts:
OpenType:
- arnold
- beethoven
- cadence (no brace font)
- emmentaler
- gonville
- gutenberg1939
- haydn
- improviso
- lilyboulez (no brace font)
- lilyjazz
- paganini (no brace font)
- profondo
- ross
- scorlatti (no brace font)
- sebastiano

OK, next time we’ll see more examples and you’ll get the opportunity to taste some of the power that lies in the new infrastructure of openLilyLib. Controlling the wealth of new styles with a simple interface is just what makes life continuously easier with LilyPond. I always pointed out “programmability” as a unique advantage of text-based systems. You may want to have a look at the underlying programming work that makes all this possible, so you can read the source code. The beauty of LilyPond is that such elaborate functionality can be made available for the “end user” with such elegant simplicity as \useNotationFont Improviso.

by Urs Liska at March 30, 2015 07:37 PM

Libre Music Production - Articles, Tutorials and News

Audacity 2.1.0 Released

Version 2.1.0 of Audacity, the free audio editor and recorder, was just released. Here are some of the improvements:

by Eduardo at March 30, 2015 01:14 PM

Create Digital Music » Linux

Free Audacity Audio Editor Gets Spectral Edits, Live Plug-ins

Spectral_03a

Dedicated wave editor Audacity has found enduring popularity, as a free and open source tool for working with sound. It runs on Linux, Windows, and OS X – with support for older Mac operating systems, which these days is sometimes tough to find. But just being free and open isn’t reason enough to use something, particularly when a lot of DAWs do a pretty decent job of wave editing.

This latest version of Audacity, 2.1.0, comes with some additions that might make it worth revisiting.

First, there’s spectral editing. In most software, audio selections are made by time only. Here, you can drag over particular frequency ranges to select just those portions, for audio repair or simply highlighting certain portions of sonic content. That’s been available in some commercial tools, but it’s not normally found in DAWs, and now you get it for free. See the spectral selection additions to the manual.

Second, you can now preview VST and Audio Unit effects (plus the open LADSPA format) in real-time. That’s useful for making Audacity an effect host, and can combine nicely with chains and batch processing. That is, you can preview effects live to adjust them (as you can do in a DAW) and then batch-process a bunch of sound (which your DAW can’t do easily). Plug-in hosting in general is improved, including the ability to work with multiple VST and add any effects to chains.

There’s also a new Noise Reduction effect.

Audacity still isn’t the prettiest software ever (ahem) – aesthetically and functionally, it seems the UI is due for a reboot. But I know it’s an important tool, especially for musicians on a budget. And this version is worth adding to your toolset.

Need another reason to use Audacity? How about the fact that the extreme time shifting capabilities of Paulstretch are built right in?

Check out the Audacity download page:
http://audacity.sourceforge.net/

(Manual links there are broken as I write this, so you can use my links above for that.)

Also worth considering is ocenaudio (note “ocen,” not “ocean”!):
http://www.ocenaudio.com.br/features

It isn’t as full-featured as Audacity – real-time effects preview is limited to VST, for instance, and the spectral view is not editable. It’s also free-as-in-beer; the code is closed. But the UI is substantially cleaner, and it has some nice features like multi-edit support. Thanks to Tom D in comments for the tip.

The post Free Audacity Audio Editor Gets Spectral Edits, Live Plug-ins appeared first on Create Digital Music.

by Peter Kirn at March 30, 2015 12:00 PM

Libre Music Production - Articles, Tutorials and News

Q-stuff release frenzy

Rui Nuno Capela has been on a pre-LAC2015 release frenzy with his Q-stuff suite of software. In the past week he has released updates for QjackCtl, Qsynth, Qsampler, QmidiNet, QmidiCtl and of course Qtractor. You can find full details about these releases over at rncbc.org.

by Conor at March 30, 2015 06:40 AM

March 29, 2015

rncbc.org

Qtractor 0.6.6 - The Lazy Tachyon is out!

And finally, for the wrap of the pre-LAC2015@JGU-Mainz release party, none other than the crown jewel of the whole Qstuff bunch ;)

Qtractor 0.6.6 (lazy tachyon beta) is out!

Release highlights:

  • LV2 and VST plugins GUI position persistence (NEW)
  • MIDI clip editor record/overdub note rendering (FIX)
  • VST plugin recursive discovery/search path (NEW)
  • VST-shell sub-plugins support (FIX)
  • also some old and new lurking bugs squashed.

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt4 framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Flattr this

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

License:

Qtractor is free, open-source software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Change-log:

  • MIDI clip record/reopen to/from SMF format 0 has been fixed.
  • LV2 and VST plugins GUI editor widget position is preserved across hide/show cycles.
  • Added application description as freedesktop.org's AppData.
  • Added a "Don't ask this again" prompt option to zip/archive extrated directory removal/replace warning messages.
  • MIDI clip editor (aka. piano-roll) gets lingering notes properly shown while on record/overdubbing.
  • Current highlighted client/port connections are now drawn with thicker connector lines.
  • Fixing segfaults due to QClipboard::mimeData() returning an invalid null pointer while on Qt5 and Weston.
  • Return of an old hack/fix for some native VST plugins with GUI editor, on whether to skip the explicit shared library unloading on close and thus avoid some mysterious crashes on session and/or application exit.
  • Force reset of plugin selection list when any of the plugin search paths change (in View/Options.../Plugins/Paths).
  • Recursive VST plugin search is now in effect for inventory and discovery on path sub-directories (VST only).
  • Non-dummy scanning for regular (non-shell) VST plugins was doomed to infinite-loop freezes on discovery; now fixed.

Enjoy && keep the fun.

by rncbc at March 29, 2015 04:30 PM

Libre Music Production - Articles, Tutorials and News

Calf 0.0.60 Released!

The Calf team have just announced the release of version 0.0.60 of the very popular Calf Studio Gear plugins.

There are many bug fixes, new plugins as well as new features.

There are 16 new plugins included in this release, bringing the total number of Calf plugins up to 45. The new plugins are as follows -

by Conor at March 29, 2015 03:56 PM

New release of Giada Loop Machine

Version 0.9.5 of Giada Loop Machine has been released. This version, codename 'Nabla Symbols', has a number of refinements, as well as permanent MIDI mapping.

by Conor at March 29, 2015 03:42 PM

Recent changes to blog

J Hendrix Fuzz Face

It's raining here today, so I'm playing around with our Ampsim toolkit.
Well, let's try to emulate the Fuzz Face of J. Hendrix, I said to myself.
The first step is to create the schematic of the unit.
Here it is:

Fuzz Face

Now, create a (python) build script to generate faust source code
with our DK simulator in our Ampsim toolkit.

import os
from analog import *

schema = "Fuzzface2.sch"
path = "tmp"
module_id = "fuzzface"
mod = os.path.join(path, module_id+".so")

# create plugin
c1 = Circuit()
c1.plugindef = dk_simulator.PluginDef(module_id)
c1.plugindef.name = "Fuzz Face"
c1.plugindef.description = "J Hendrix Fuzz Face simulation"
c1.plugindef.category = "Distortion"
c1.plugindef.id = "fuzzface"
c1.set_module_id(module_id)
c1.read_gschem(schema)       # parse the circuit from the gschem schematic above
c1.create_faust_module()     # generate the faust source for this circuit via the DK simulator

and yep, I started guitarix and found the new plugin in the Distortion category. Wow, sounds great. That will be enough for me to play with today. :D

Maybe one or the other of you will try it; no problem, it's in the guitarix git repository already.
And for those who use faust themselves, here is the resulting faust source:

 // generated automatically
// DO NOT MODIFY!
declare id "fuzzface";
declare name "Fuzz Face";
declare category "Distortion";
declare description "J Hendrix Fuzz Face simulation";

import("filter.lib");

process = pre : iir((b0/a0,b1/a0,b2/a0,b3/a0,b4/a0,b5/a0),(a1/a0,a2/a0,a3/a0,a4/a0,a5/a0)) with {
    LogPot(a, x) = if(a, (exp(a * x) - 1) / (exp(a) - 1), x);
    Inverted(b, x) = if(b, 1 - x, x);
    s = 0.993;
    fs = float(SR);
    pre = _;

        Volume = 1.0 - vslider("Volume[name:Volume]", 0.5, 0, 1, 0.01) : Inverted(0) : LogPot(0) : smooth(s);

        Fuzz = 1.0 - vslider("Fuzz[name:Fuzz]", 0.5, 0, 1, 0.01) : Inverted(0) : LogPot(0) : smooth(s);

    b0 = Fuzz*(Fuzz*(Volume*pow(fs,3)*(4.76991513499346e-20*fs + 5.38351707988916e-15) + pow(fs,3)*(-4.76991513499346e-20*fs - 5.38351707988916e-15)) + Volume*pow(fs,3)*(-4.76991513499346e-20*fs + 5.00346713698171e-13) + pow(fs,3)*(4.76991513499346e-20*fs - 5.00346713698171e-13)) + Volume*pow(fs,2)*(-5.05730339185222e-13*fs - 1.16162215422261e-12) + pow(fs,2)*(5.05730339185222e-13*fs + 1.16162215422261e-12);

    b1 = Fuzz*(Fuzz*(Volume*pow(fs,3)*(-1.43097454049804e-19*fs - 5.38351707988916e-15) + pow(fs,3)*(1.43097454049804e-19*fs + 5.38351707988916e-15)) + Volume*pow(fs,3)*(1.43097454049804e-19*fs - 5.00346713698171e-13) + pow(fs,3)*(-1.43097454049804e-19*fs + 5.00346713698171e-13)) + Volume*pow(fs,2)*(5.05730339185222e-13*fs - 1.16162215422261e-12) + pow(fs,2)*(-5.05730339185222e-13*fs + 1.16162215422261e-12);

    b2 = Fuzz*(Fuzz*(Volume*pow(fs,3)*(9.53983026998693e-20*fs - 1.07670341597783e-14) + pow(fs,3)*(-9.53983026998693e-20*fs + 1.07670341597783e-14)) + Volume*pow(fs,3)*(-9.53983026998693e-20*fs - 1.00069342739634e-12) + pow(fs,3)*(9.53983026998693e-20*fs + 1.00069342739634e-12)) + Volume*pow(fs,2)*(1.01146067837044e-12*fs + 2.32324430844522e-12) + pow(fs,2)*(-1.01146067837044e-12*fs - 2.32324430844522e-12);

    b3 = Fuzz*(Fuzz*(Volume*pow(fs,3)*(9.53983026998693e-20*fs + 1.07670341597783e-14) + pow(fs,3)*(-9.53983026998693e-20*fs - 1.07670341597783e-14)) + Volume*pow(fs,3)*(-9.53983026998693e-20*fs + 1.00069342739634e-12) + pow(fs,3)*(9.53983026998693e-20*fs - 1.00069342739634e-12)) + Volume*pow(fs,2)*(-1.01146067837044e-12*fs + 2.32324430844522e-12) + pow(fs,2)*(1.01146067837044e-12*fs - 2.32324430844522e-12);

    b4 = Fuzz*(Fuzz*(Volume*pow(fs,3)*(-1.43097454049804e-19*fs + 5.38351707988916e-15) + pow(fs,3)*(1.43097454049804e-19*fs - 5.38351707988916e-15)) + Volume*pow(fs,3)*(1.43097454049804e-19*fs + 5.00346713698171e-13) + pow(fs,3)*(-1.43097454049804e-19*fs - 5.00346713698171e-13)) + Volume*pow(fs,2)*(-5.05730339185222e-13*fs - 1.16162215422261e-12) + pow(fs,2)*(5.05730339185222e-13*fs + 1.16162215422261e-12);

    b5 = Fuzz*(Fuzz*(Volume*pow(fs,3)*(4.76991513499346e-20*fs - 5.38351707988916e-15) + pow(fs,3)*(-4.76991513499346e-20*fs + 5.38351707988916e-15)) + Volume*pow(fs,3)*(-4.76991513499346e-20*fs - 5.00346713698171e-13) + pow(fs,3)*(4.76991513499346e-20*fs + 5.00346713698171e-13)) + Volume*pow(fs,2)*(5.05730339185222e-13*fs - 1.16162215422261e-12) + pow(fs,2)*(-5.05730339185222e-13*fs + 1.16162215422261e-12);

    a0 = Fuzz*(Fuzz*fs*(fs*(fs*(fs*(-3.73292075290073e-29*fs - 1.05633134620746e-20) - 3.11506369039915e-14) - 2.30719916990074e-11) - 1.07493164710329e-9) + fs*(fs*(fs*(fs*(3.73292075290073e-29*fs + 1.01643277726662e-20) + 2.91602352831988e-14) + 2.29636966370042e-11) + 1.07449105454163e-9)) + fs*(fs*(fs*(3.98985774247549e-22*fs + 1.99042653510896e-15) + 1.83615604104971e-13) + 5.31230624730483e-11) + 2.44402781742033e-9;

    a1 = Fuzz*(Fuzz*fs*(fs*(fs*(fs*(1.86646037645036e-28*fs + 3.16899403862238e-20) + 3.11506369039915e-14) - 2.30719916990074e-11) - 3.22479494130986e-9) + fs*(fs*(fs*(fs*(-1.86646037645036e-28*fs - 3.04929833179984e-20) - 2.91602352831988e-14) + 2.29636966370042e-11) + 3.22347316362488e-9)) + fs*(fs*(fs*(-1.19695732274265e-21*fs - 1.99042653510896e-15) + 1.83615604104971e-13) + 1.59369187419145e-10) + 1.22201390871017e-8;

    a2 = Fuzz*(Fuzz*fs*(fs*(fs*(fs*(-3.73292075290073e-28*fs - 2.11266269241492e-20) + 6.2301273807983e-14) + 4.61439833980148e-11) - 2.14986329420657e-9) + fs*(fs*(fs*(fs*(3.73292075290073e-28*fs + 2.03286555453323e-20) - 5.83204705663976e-14) - 4.59273932740084e-11) + 2.14898210908325e-9)) + fs*(fs*(fs*(7.97971548495099e-22*fs - 3.98085307021793e-15) - 3.67231208209942e-13) + 1.06246124946097e-10) + 2.44402781742033e-8;

    a3 = Fuzz*(Fuzz*fs*(fs*(fs*(fs*(3.73292075290073e-28*fs - 2.11266269241492e-20) - 6.2301273807983e-14) + 4.61439833980148e-11) + 2.14986329420657e-9) + fs*(fs*(fs*(fs*(-3.73292075290073e-28*fs + 2.03286555453323e-20) + 5.83204705663976e-14) - 4.59273932740084e-11) - 2.14898210908325e-9)) + fs*(fs*(fs*(7.97971548495099e-22*fs + 3.98085307021793e-15) - 3.67231208209942e-13) - 1.06246124946097e-10) + 2.44402781742033e-8;

    a4 = Fuzz*(Fuzz*fs*(fs*(fs*(fs*(-1.86646037645036e-28*fs + 3.16899403862238e-20) - 3.11506369039915e-14) - 2.30719916990074e-11) + 3.22479494130986e-9) + fs*(fs*(fs*(fs*(1.86646037645036e-28*fs - 3.04929833179984e-20) + 2.91602352831988e-14) + 2.29636966370042e-11) - 3.22347316362488e-9)) + fs*(fs*(fs*(-1.19695732274265e-21*fs + 1.99042653510896e-15) + 1.83615604104971e-13) - 1.59369187419145e-10) + 1.22201390871017e-8;

    a5 = Fuzz*(Fuzz*fs*(fs*(fs*(fs*(3.73292075290073e-29*fs - 1.05633134620746e-20) + 3.11506369039915e-14) - 2.30719916990074e-11) + 1.07493164710329e-9) + fs*(fs*(fs*(fs*(-3.73292075290073e-29*fs + 1.01643277726662e-20) - 2.91602352831988e-14) + 2.29636966370042e-11) - 1.07449105454163e-9)) + fs*(fs*(fs*(3.98985774247549e-22*fs - 1.99042653510896e-15) + 1.83615604104971e-13) - 5.31230624730483e-11) + 2.44402781742033e-9;
};

by brummer at March 29, 2015 01:12 PM

March 28, 2015

linux.autostatic.com » linux.autostatic.com

Wolfson Audio Card for Raspberry Pi

Just ordered a Wolfson Audio Card for Raspberry Pi via RaspberryStore. I asked them about this audio interface at their stand during the NLLGG meeting where I did a presentation about doing real-time audio with the RPi and they told me they would ship it as soon as it would become available. They kept their word so I'm hoping to mount this buddy on my RPi this very week. Hopefully it will be an improvement and allow me to achieve low latencies with a more stable RPi so that I can use it in more critical environments (think live on stage). It has a mic in so I can probably set up the RPi with the Wolfson card quite easily as a guitar pedal. Just a pot after the line output, stick it in a Hammond case, put guitarix on it and rock on.


Wolfson Audio Card for Raspberry Pi

by Jeremy at March 28, 2015 05:18 PM

March 27, 2015

Libre Music Production - Articles, Tutorials and News

LAC 2015 program

This year's Linux Audio Conference will be getting underway in just under two weeks' time. The conference runs from April 9th to 12th and will be taking place in Mainz, Germany.

You can now check out the program for the conference over at the LAC website. Don't forget, if you have any questions, you can hop onto the LAC2015 IRC channel.

by Conor at March 27, 2015 08:58 AM

March 26, 2015

Libre Music Production - Articles, Tutorials and News

Musescore 2.0 Released

After over four years of development and a lot of work from over 400 contributors, MuseScore 2.0 is finally available! You can download it from musescore.org.

by Eduardo at March 26, 2015 03:15 PM

Hackaday » digital audio hacks

SNES Headphones Cry for Bluetooth Has Been Answered

A year and a half ago we ran a post about a SNES controller modified into a pair of headphones. They were certainly nice looking and creative headphones but the buttons, although present, were not functional. The title of the original post was (maybe antagonistically) called: ‘SNES Headphones Scream Out For Bluetooth Control‘.

Well, headphone modder [lyberty5] is back with a vengeance. He has heeded the call by building revision 2 of his SNES headphones… and guess what, they are indeed Bluetooth! Not only that, the A, B, X and Y buttons are functional this time around and have been wired up to the controls on the donor Bluetooth module.

To get this project started, the SNES controller was taken apart and the plastic housing was cut up to separate the two rounded sides. A cardboard form was glued in place so that epoxy putty could be roughly formed in order to make each part completely round. Once cured, the putty was sanded and imperfections were filled with auto body filler. Holes were drilled for mounting to the headband and a slot was made for the Bluetooth module’s USB port so the headphones can be charged. The headphones were then reassembled after a quick coat of paint in Nintendo Grey. We must say that these things look great.

If you’d like to make your own set of SNES Bluetooth Headphones, check out the build video after the break.


Filed under: digital audio hacks, nintendo hacks

by Rich Bremer at March 26, 2015 08:01 AM

March 25, 2015

rncbc.org

QjackCtl 0.3.13, Qsynth 0.3.9, Qsampler 0.3.0 released!

The pre-LAC2015 release frenzy continues... ;)

Now for the next batch...

QjackCtl - A JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.3.13 is out.

QjackCtl is a simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

website:
http://qjackctl.sourceforge.net
downloads:
http://sourceforge.net/projects/qjackctl/files

Change-log:

  • Added application description as freedesktop.org's AppData.
  • Setup dialog form is now modeless.
  • Introducing brand new active patchbay reset/disconnect-all user preference option.
  • Current highlighted client/port connections are now drawn with thicker connector lines.
  • New user preference option on whether to show the nagging 'program will keep running in the system tray' message, on main window close.
  • Connections lines now drawn with anti-aliasing; connections splitter handles width is now reduced.
  • Drop missing or non-existent patchbay definition files from the most recent used list.

Flattr this


Qsynth - A FluidSynth Qt GUI Interface

Qsynth 0.3.9 is out.

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt4 toolkit using Qt Designer.

website:
http://qsynth.sourceforge.net
downloads:
http://sourceforge.net/projects/qsynth/files

Change-log:

  • Added application description as freedesktop.org's AppData.
  • New user preference option on whether to show the nagging 'program will keep running in the system tray' message, on main window close.
  • Application close confirm warning is now raising the main window as visible and active for due top level display, especially applicable when minimized to the system tray.
  • A man page has been added.
  • Translations install directory change.
  • Allow the build system to include an user specified LDFLAGS.
  • Czech (cs) translation updated (by Pavel Fric, thanks).

Flattr this


Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.3.0 is out.

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt4 toolkit using Qt Designer.

website:
http://qsampler.sourceforge.net
downloads:
http://sourceforge.net/projects/qsampler/files

Change-log:

  • Added application description as freedesktop.org's AppData.
  • Added this "Don't ask/show this again" option to some if not most of the nagging warning/error message boxes.
  • Mac OS X: Fixed default path of linuxsampler binary.
  • When closing qsampler and showing the dialog asking whether to stop the LinuxSampler backend, the default selection is now "Yes".
  • The master volume slider now gets a proper layout when its main toolbar orientation changes.
  • Messages standard output capture has been slightly improved to use non-blocking I/O, whenever available.
  • Adjusted configure check for libgig to detect the new --includedir.
  • A man page has been added (based on Matt Flax's work on Debian, thanks).
  • Translations install directory change.
  • Added support for SF2 instrument names/preset enumeration.
  • Added instrument list popup on channel strip which shows up when the instrument name is clicked. Allows faster switching among instruments of the same file.
  • Adjusted configure check for libgig to detect its new --libdir (impolitely forcing the user now to have at least libgig 3.3.0).


License:

QjackCtl, Qsynth and Qsampler are free, open-source software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && have some fun!

by rncbc at March 25, 2015 06:30 PM

Hackaday » digital audio hacks

Logic Noise: Filters and Drums

Logic Noise is an exploration of building raw synthesizers with CMOS logic chips. This session, we continue to abuse the 4069UB as an amplifier. We’ll turn the simple unity-gain buffer of last session into a single-pole active lowpass filter with a single part. (Spoiler: it’s a capacitor.)

While totally useful, this simple filter is a bit boring and difficult to make dynamic. So we’ll look into an entirely different filter, the Twin-T notch filter, that turns out to be sharp enough to build a sine-wave oscillator on, and tweakable enough that we’ll make a damped-oscillator drum sound out of it.

Here’s a quick demo of where we’re heading. Read on to see how we get there.

Filters

Last session, we built an amplifier and played around with the gain: the ratio of how much voltage swing is output relative to how much is input. An active filter is an amplifier where this gain depends on the frequency of the incoming signal. This lets us carve out different frequency ranges that we’d either like more or less of. (In general, though, you don’t need an amplifier to filter. See passive filters versus active filters.)

When you pluck a string on a guitar, for instance, all sorts of frequencies are produced. But over time the string vibrations are damped out by the wood that the guitar is made of, and within a half-second or so, most of the vibrations left are related to the string’s fundamental vibrational frequency (determined by where your finger is on the frets). The higher frequency vibrations are the first to go. This suggests a sound synthesis strategy to make “natural” sounding instruments: generate all sorts of frequencies and then filter out the higher ones.

Single-pole Lowpass Filter

Given that we’ve already made a few simple amplifier circuits last time, it’s a quick step to understand the simplest of all filters: the single-pole filter. Here’s the circuit diagram:

filter.sch

Yeah, that’s an added capacitor. That’s all there is to it. But have a listen to the difference:

Remember the intuition about the negative-feedback amplifier from last time. We had two resistors, one between the input and the 4069, and the other in feedback between the (inverted) output and the input. When the input voltage wiggled around the 4069’s neutral voltage, the output wiggled in the opposite direction. And the ratio of the voltage swings, the gain, depends on how hard the feedback path has to work to cancel out the incoming signal current.

The same intuition works for the filter, as long as you understand one thing about capacitors. Capacitors pass current through them only reluctantly. The amount of charge a capacitor can soak up before it's "charged up" to a voltage that resists any further incoming current is set by its capacitance. Or, in electro-math: C = Q/V or V = Q/C, where Q is the charge on the capacitor, which is also the current (charge per second) summed up over time.

In short, the more charge you put into a capacitor, the higher voltage it develops to resist putting more charge into it. And how quickly this voltage ramps up is proportional to one over the capacitance and directly proportional to the current passing through.

For us, this means that it’s easy to pass a given current through a capacitor for a short while, harder to pass the same current through for a longer time, and impossible to get current through forever without increasing the input voltage to overcome the capacitor’s “charged-up” voltage. Or put another way: capacitors let high frequency current vibrations through easily, resist middle frequencies, and deny constant-voltage direct current.

So what happens when we put a capacitor in the feedback path of our unity-gain feedback amplifier? Since the capacitor nearly blocks very low frequencies, all of them have to pass through the resistor, and we get unity gain. As we increase the frequency, some of the current starts to pass through the capacitor and the total feedback resistance is lowered. This means that the output has an easier job cancelling out the input, and thus less gain at middle frequencies. At very high frequencies, the capacitor will pass currents so easily that almost none will even need to go through the resistor, and the gain drops even lower.

Put more succinctly, the capacitor resists lower frequencies more than higher ones. In a negative feedback amplifier, output gain increases when it’s harder to push current through the feedback path. So by putting a capacitor in the feedback path, we make an amplifier with more gain in the low frequencies and less gain for higher frequencies. Voila, a lowpass filter!
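
If you want to see that rolloff as numbers, here's a small sketch of the gain magnitude for an inverting stage with the capacitor in parallel with the feedback resistor, as described above. The 100k / 100k / 10 nF values are illustrative picks, not read off the schematic.

    import math

    def lowpass_gain(freq_hz, r_in, r_fb, c_fb):
        # Inverting stage, capacitor in parallel with the feedback resistor:
        # |H| = (Rfb/Rin) / sqrt(1 + (2*pi*f*Rfb*Cfb)^2)
        return (r_fb / r_in) / math.sqrt(1.0 + (2.0 * math.pi * freq_hz * r_fb * c_fb) ** 2)

    R_IN, R_FB, C_FB = 100e3, 100e3, 10e-9    # illustrative values only
    for f in (20, 159, 1000, 10000):
        print(f"{f:>6} Hz -> gain {lowpass_gain(f, R_IN, R_FB, C_FB):.3f}")

With these values the gain sits near unity at 20 Hz, hits the familiar 0.707 around 159 Hz, and keeps falling from there.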

Variable Cutoff Lowpass Filter

What if we want to vary the cutoff frequency? In math, the cutoff frequency for a single pole lowpass filter like this is 1/(2 * pi * R * C). Practically, we can vary the cutoff frequency by changing the capacitor or by changing the input current through the resistor. So we’ll set the basic range by picking a capacitor value and vary the filter’s frequency response by turning a potentiometer. For the circuit here, the cutoff frequency ranges from 160 Hz at 100k ohms to 1600 Hz at 10k ohms.
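
Here's a quick check of those endpoints against the cutoff formula. The roughly 10 nF capacitor value is inferred from the quoted 160 Hz / 1600 Hz numbers, not stated in the article.

    import math

    def cutoff_hz(r_ohms, c_farads):
        # Single-pole cutoff: 1 / (2*pi*R*C)
        return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

    C = 10e-9   # ~10 nF reproduces the quoted endpoints; an inference, not a stated part value
    print(f"100k -> {cutoff_hz(100e3, C):6.0f} Hz")   # ~159 Hz, quoted as 160 Hz
    print(f" 10k -> {cutoff_hz(10e3, C):6.0f} Hz")    # ~1592 Hz, quoted as 1600 Hz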

But there’s one catch with varying the input resistor; we also change the overall gain which depends on the ratio of feedback resistor to input resistor. So if you’re going to be changing the frequency response by changing the input resistor a lot, you might also want to change the feedback resistor at the same time to track it, holding the overall (passband) gain roughly constant. For that, you’ll need a stereo / dual potentiometer, which is simply two potentiometers linked to the same shaft. With one knob, you control two identical resistors.

Before we leave the single pole filter, you can convert the lowpass filter here into a highpass filter simply by moving the capacitor out from the feedback loop and sticking it in front of the input resistor. Give it a shot!
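
As a sanity check on that swap, here's the same kind of gain calculation with the capacitor moved in series with the input resistor (again with made-up 100k / 100k / 10 nF values): the gain now starts near zero at low frequencies and climbs toward the resistor ratio.

    import math

    def highpass_gain(freq_hz, r_in, r_fb, c_in):
        # Inverting stage, capacitor in series with the input resistor:
        # |H| = Rfb / sqrt(Rin^2 + (1/(2*pi*f*Cin))^2)
        x_c = 1.0 / (2.0 * math.pi * freq_hz * c_in)
        return r_fb / math.sqrt(r_in ** 2 + x_c ** 2)

    R_IN, R_FB, C_IN = 100e3, 100e3, 10e-9    # illustrative values only
    for f in (20, 159, 1000, 10000):
        print(f"{f:>6} Hz -> gain {highpass_gain(f, R_IN, R_FB, C_IN):.3f}")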

Twin-T Filter

Our story gets significantly more interesting if we toss a more complicated filtering element in the feedback path, and one of our favorite filters is the Twin T. Instead of being a lowpass filter like the one above, the Twin T is a notch filter. Notch filters pass both high and low frequencies, but are tuned to knock out a particular frequency in the middle.

In its raw form, the Twin T filter is fairly useful for killing a specific nuisance frequency in a signal. Maybe you want to knock out power line noise (60Hz in the USA, 50Hz in Europe). Toss a Twin T filter that’s tuned to 60Hz into the chain, and you’ll get rid of most of the noise without damping down the rest of your signal very much. To see why it’s called a Twin T, have a look at the circuit diagram:

twint.sch

The Twin T works by combining two signal paths, each one T-shaped. The T with two resistors and a capacitor to ground is a simple lowpass filter, essentially a passive version of the one we made above. The other T with the series capacitors and resistor to ground is a highpass filter.

Highpass and lowpass sounds like everything should get through, right? Yes, but. At the frequency that the filter is tuned for (the "cutoff" frequency) the two outputs are exactly 90 degrees out of phase from the input, but in opposite directions. In theory, if both Ts are tuned to the same frequency, the two paths exactly cancel each other out at the cutoff frequency and none of that frequency makes it through at all. In reality, you can get the two branches fairly close to each other and get very good, but not perfect, cancellation of the tuned frequency.
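
You can check the cancellation claim numerically with the textbook transfer function of an ideal, perfectly matched and unloaded Twin-T; real parts on a breadboard won't null anywhere near this deeply. The 27k / 100 nF values are illustrative picks that happen to land the notch near the 60 Hz mains-hum example above.

    import math

    def twin_t_mag(freq_hz, r_ohms, c_farads):
        # |H| of an ideal, matched, unloaded Twin-T: H(s) = (1 + (sRC)^2) / (1 + 4*sRC + (sRC)^2)
        s_rc = 1j * 2.0 * math.pi * freq_hz * r_ohms * c_farads
        return abs((1 + s_rc ** 2) / (1 + 4 * s_rc + s_rc ** 2))

    R, C = 27e3, 100e-9                        # illustrative values; notch lands near 59 Hz
    f0 = 1.0 / (2.0 * math.pi * R * C)
    for f in (f0 / 10, f0 / 2, f0, 2 * f0, 10 * f0):
        print(f"{f:7.1f} Hz -> |H| = {twin_t_mag(f, R, C):.3f}")

Far above and below the notch the magnitude heads back toward one, which is exactly the "passes high and low, kills the middle" behaviour described above.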

What happens when we put a Twin T filter into the feedback path of an amplifier? Remember that the negative feedback logic requires the output to create more voltage the harder it is to push current back through the feedback path. So instead of knocking out the frequency that the filter is tuned to, we get that one particular frequency amplified. If there’s a little bit of noise entering the input at our tuned frequency, it’ll get amplified a lot and all of the other frequencies will get attenuated. And suddenly you’ve got a sine-wave oscillator.

Drums

Which brings us to today’s killer circuit, and a little bit of a refinement on the above explanation. The short version is that we detune the Twin T filter a little bit so that it only rings when it’s given an impulse and then dies out.

First let’s play a little bit and build up the Twin-T and 4069UB amplifier part of the circuit. It’s just the Twin-T filter from above set up in the feedback path of a 4069UB inverter stage, and then sent out directly through another 4069UB inverter as a buffer. It’s overdriven and you’ll hear the clicks of the trigger bleeding through, but it’s a start.

drums_simple.sch

Refinements

With the basic circuit working, let’s expand on it in two different ways. First, we’ll drive the drum with another oscillator circuit. Then, we’ll pass the audio out through a lowpass filter to knock off some of the trigger pulse bleedthrough.

Here’s the final circuit:

drums.sch

Starting on the left, we have a very low frequency oscillator set up on the 40106 and buffered using another 40106 stage. This simply puts out a nice reliable square wave. The signal then passes through a capacitor, which again has the effect of letting only the higher frequencies pass through. What makes it through looks basically like a quick pulse (in green).

drum_square_to_trigger
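
If you don't have a scope handy, this toy simulation of a square wave pushed through a series capacitor into a resistive load shows why only short spikes at the edges survive. The component values and step size are arbitrary illustrations, not the parts in the schematic.

    # Toy simulation: a square wave through a series capacitor into a resistive load
    # acts as a one-pole highpass, so only short spikes at the edges survive.
    R, C = 10e3, 100e-9           # illustrative values, time constant R*C = 1 ms
    dt = 0.0001                   # 0.1 ms steps
    a = (R * C) / (R * C + dt)

    x_prev, y = 0.0, 0.0
    for n in range(100):
        x = 1.0 if n < 50 else 0.0            # input: high for 5 ms, then low for 5 ms
        y = a * (y + x - x_prev)              # standard discrete one-pole highpass
        x_prev = x
        if n % 10 == 0 or n in (50, 51, 52):
            print(f"t = {n * dt * 1000:4.1f} ms   in = {x:.0f}   out = {y:+.3f}")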

The trigger signal pulse is inserted into the feedback loop of the Twin T. It’s actually not crucial where you attach the trigger, but it’ll couple less with the Twin T section if you connect it here.

And finally, we’ll pass the signal through a lowpass filter to remove the clicky noise that comes from the raw trigger signal feeding through to the output.

Range

What values should we use for capacitors and resistors? Try to pick the component values so that the single capacitor in the lowpass T is twice as large as the two capacitors (2 C) and the single resistor is half as large as the paired resistors (1/2 R). This makes both Ts tune to the same frequency, given again by 1/(2*pi*R*C) where R and C are the values of the paired resistors and capacitors respectively.
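
A small helper makes the ratio rule concrete: pick a target frequency and a paired capacitor value, and it returns the paired resistor along with the 2C and R/2 leg values. The 150 Hz / 100 nF example is just an illustration.

    import math

    def twin_t_parts(f_target_hz, c_pair_farads):
        # Paired resistor from 1/(2*pi*R*C), then the 2C and R/2 leg values.
        r_pair = 1.0 / (2.0 * math.pi * f_target_hz * c_pair_farads)
        return r_pair, 2.0 * c_pair_farads, r_pair / 2.0

    # e.g. aiming the drum's ring near 150 Hz with 100 nF paired caps (purely illustrative)
    r, c_leg, r_leg = twin_t_parts(150.0, 100e-9)
    print(f"paired R ~ {r / 1e3:.1f}k   lowpass-leg cap = {c_leg * 1e9:.0f} nF   highpass-leg R ~ {r_leg / 1e3:.1f}k")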

In practice, try to get factor-of-two capacitors and leave the resistors adjustable wherever possible. Since we’ll be de-tuning the circuit on purpose to make the oscillations die out slowly, there’s not a reliable formula for the resistances. You’ll just have to pick capacitors and tweak the knobs until it works. That said, if you find you want frequencies outside of the range that you’re currently getting, don’t hesitate to swap out the capacitors.

Tweaking and Tuning

Detuning the Twin-T section is the secret to making this circuit work as a drum rather than as a sine-wave oscillator, and the approach you’ll have to take is a bit experimental, so let’s talk about tuning this circuit. If you align the two halves of the Twin T perfectly, as we mentioned before, only the one single frequency will be blocked, and thus only that one frequency will be amplified by the negative feedback circuit. You’ll get a very nice sine wave oscillator, but not drums.

If you detune the two halves of the Twin T from each other, especially if you do so by raising the cutoff frequency of the highpass filter above that of the lowpass filter, a wider and wider band of frequencies is blocked by the Twin T, and thus receives the extra gain from the amplifier.

But as you spread the gain over a wider and wider band of frequencies, you get less gain at any given frequency. As you continue to detune the Ts from each other, you’ll reach a point where the circuit no longer amplifies any single frequency enough to oscillate indefinitely by itself. However, and this is the key here, the filter will oscillate for a while if you provide it with a strong enough impulse signal. And that’s exactly what we’re doing with the square wave coupled through the capacitor coming from the tempo oscillator. It’s nice to watch the damped waveforms on a scope if you’ve got one.

drum_trigger_and_pulse

So here’s a procedure for getting close to your desired sound. To enable oscillation over a wide range of frequencies, set the decay potentiometer as low as it will go. This sets the highpass leg of the T to a very high cutoff frequency, which means that it’s passing nearly nothing. This frees up the lowpass T section to determine the pitch, and for most of the tuning potentiometer’s range you’ll get oscillations. Pick the rough pitch you want by listening to the oscillator. Now you can tune up the decay pot until the oscillations are just damped out and you’ll be set.

But notice that the two potentiometers influence each other a little bit. That’s because the two legs of the T are simply electrically connected. So as you increase the decay to go from oscillator to drum, be ready to also tweak the frequency potentiometer to keep the drum tone at your desired pitch and decay rate.

Extensions

If you’re interested in exploring more active filter designs than just the single pole lowpass shown here, have a look at Rod Elliott’s great writeup on active filters. You can either break down and use op-amps and dual power supplies, or you can keep hacking and replace any of the op-amps in his circuits with a 4069UB stage as long as they only use negative feedback and have the op-amp’s positive terminal connected to ground. In particular, have a look at the multiple feedback topology and the biquad.

If you don’t need synth drums, you can simply tune the Twin T up and use the circuit as a sine wave oscillator. For a single set of capacitors it’s not very widely adjustable, but if all you need is a single frequency you can pick the right capacitors and you’re set. It’s not the best sine wave oscillator out there, but it’s hard to beat a one-chip build with a few passive components thrown in.

But don’t take our word for it: here’s a scope shot. The yellow line is the produced sine wave, and the purple is a FFT of the signal. Vertical bars are 20dBV, or a factor of ten. The first peak, at 150Hz is our sine wave, and the second peak is down in voltage by about a factor of 100. It’s not lab equipment, but it’s pretty solid for the abuse of a CMOS logic chip.

scope_0
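
The factor-of-100 claim is just decibel arithmetic: with 20 dB per vertical division, two divisions down is -40 dB in voltage.

    import math

    def db_from_voltage_ratio(ratio):
        return 20.0 * math.log10(ratio)

    print(db_from_voltage_ratio(1 / 10))    # one 20 dB division down  -> -20.0
    print(db_from_voltage_ratio(1 / 100))   # two divisions down       -> -40.0, roughly the second peak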

And there’s nothing stopping you from feeding the circuit with audio-frequency trigger pulses if you want to freak out. The result is very similar to the sync oscillator we built before but it’s a lot mellower because the waveforms involved are fundamentally sine waves here.

Have fun!


Filed under: digital audio hacks, Featured, musical hacks

by Elliot Williams at March 25, 2015 02:01 PM

Scores of Beauty

Schemellis Gesangbuch

This month, March 2015, marks J.S. Bach's 330th birthday. For the occasion, the Pirate-Fugues team has published a new edition of 4-voiced transcriptions of the songs from Schemellis Musicalisches Gesang-Buch, BWV 439–507. LilyPond is among the tools in our production pipeline.

Some of the arias in Georg Christian Schemelli's song book are fairly well known, for instance Ich steh an deiner Krippe hier, BWV 469, and Komm süsser Tod, BWV 478.

Each original score from the collection consists of 2 voices:

  • a soprano voice with lyrics, and
  • a bass voice with Generalbass notation.

Here is an example: The first few measures of Mein Jesu, was für Seelenweh, BWV 487

bwv0487input

In order to create a 4-voiced transcription, we add 2 voices in between the 2 existing ones. The resulting score could look something like this:

bwv0487output

Transcriptions of these songs already exist. So what is special about our edition? Our goal was to create the 4-voiced transcriptions as faithfully as possible to J.S. Bach's own musical style. And we want the computer to help us do it. Our composition approach is data-driven: our custom-made software harvests patterns from more than 1700 digitized scores by J.S. Bach.

The process is not fully automated, and we don't think this would be desirable anyway. Instead, the software computes between 10 and 30,000 suggestions of up to 3 measures in duration. The suggestions are readily sorted according to intuitive mathematical criteria such as

  • voice coverage,
  • number of notes,
  • frequency of note constellations in the database.

These and other categories allow the user to filter and narrow down the numerous possible insertions in a convenient and meaningful way.
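
As a toy illustration only (this is not the Pirate-Fugues code, and the field names are made up), sorting candidate insertions by criteria of that kind could look like this:

    # A toy illustration only -- not the Pirate-Fugues code, and the field names are made up.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        voice_coverage: float      # how much of the inner voices the insertion fills
        note_count: int
        pattern_frequency: int     # how often the note constellation occurs in the database

    candidates = [
        Suggestion(0.95, 14, 37),
        Suggestion(1.00, 11, 12),
        Suggestion(0.80, 18, 90),
    ]

    # Rank by coverage first, then by how common the pattern is in the corpus.
    for s in sorted(candidates, key=lambda s: (s.voice_coverage, s.pattern_frequency), reverse=True):
        print(s)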

The creative process usually takes 15–45 minutes for an entire song and requires a lot of user interaction. The video is only a summary to illustrate what the computed suggestions look like for the song BWV 487 already introduced above:

Note that the sequential start-to-finish fashion is only there to make the video align with the music. During the composition phase, the user can choose to edit the score in any order.

Before we elaborate on the role of LilyPond in our publication, we wrap up the description of the project:

Our software has a unique set of requirements:

  • the music notation (as shown in the video) requires precise control over the note placement in order to prevent jerkiness when browsing the suggestions;
  • extra information is drawn into the score: selected pitch range for computation, available pitches in the suggestions;
  • user interaction with the mouse filters and narrows down the suggestions.

No prior API was available to perform these tasks. So instead, we developed our own and called it The Pirate Fugues.

The audio for the collection of 69 songs is synthesized using the third-party software Pianoteq, Ivory II, and Hauptwerk (all trademarked, and with none of which we are affiliated). For each song, we provide an animation that visualizes the suggestions made by our software and indicates the local correlation of the final score to the database. The website of our project is http://djtascha.de/schemellis-gesangbuch/ where you can listen to the results, download the sheet music, and find additional information on the technique.

Disclaimer: Faithful to J.S.Bach’s style is a bold claim and one that invariably sparks controversy. Although we have taken great care in compiling each score in the collection, there is room for improvement. Apart from creating the music, another objective of the project was to learn about the strengths and weaknesses of the software. Independent of your background in music, feel free to let us know what you think. Thank you!

Now, back to LilyPond:

We introduced LilyPond into our workflow about 2 years ago. From LilyPond, we have adapted

  • the chord notation,
  • the ornament labelling and graphics, as well as
  • the mensural ("Mensur") note appearance.

Since then, all scores from our projects are algorithmically exported to LilyPond for on-screen previews and ready-to-print PDFs. We are not aware of any alternative to LilyPond that is as convenient and yields results of the same visual quality.

In the future, we hope that notation software like LilyPond will be able to imitate the handwriting of famous composers such as J.S.Bach.

by datahaki at March 25, 2015 08:01 AM

March 23, 2015

rncbc.org

QmidiNet 0.2.1, QmidiCtl 0.2.0 released!

The pre-LAC2015 pre-season has just started!

Here goes the first batch... ;)

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.2.1 released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.
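
For the curious, the transport is about as simple as network MIDI gets: raw MIDI bytes in a UDP datagram aimed at a multicast group. Here's a minimal sketch; the 225.0.0.37 address is quoted in the change-log further down, while the 21928 port is the usual multimidicast/ipMIDI base port and is an assumption on our part, not something stated in this announcement.

    # Raw MIDI bytes in a UDP datagram to a multicast group. The 225.0.0.37 address is
    # quoted in the change-log below; port 21928 is the usual multimidicast/ipMIDI base
    # port and is assumed here, not taken from this announcement.
    import socket
    import time

    GROUP, PORT = "225.0.0.37", 21928

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the local network

    sock.sendto(bytes([0x90, 60, 100]), (GROUP, PORT))   # note on: middle C, velocity 100
    time.sleep(0.5)
    sock.sendto(bytes([0x80, 60, 0]), (GROUP, PORT))     # note off
    sock.close()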

Website:

http://qmidinet.sourceforge.net

Project page:

http://sourceforge.net/projects/qmidinet

Downloads:

http://sourceforge.net/projects/qmidinet/files


QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.2.0 released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. Inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and is also being promoted to the Maemo package repositories. Nevertheless, QmidiCtl may still prove effective as a regular desktop application as well.

Website:

http://qmidictl.sourceforge.net

Project page:

http://sourceforge.net/projects/qmidictl

Downloads:

http://sourceforge.net/projects/qmidictl/files


Change-log:

  • Reset (to network defaults) button added to options dialog, which also gets some layout reform.
  • Added application description as freedesktop.org's AppData.
  • Previously hard-coded UDP/IP multicast address (225.0.0.37) is now a user-configurable option.
  • A man page has been added. (QmidiCtl backlog)
  • Allow the build system to include a user-specified LDFLAGS. (QmidiCtl backlog)

License:

Both QmidiNet and QmidiCtl are free, open-source software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Cheers && Enjoy!

by rncbc at March 23, 2015 06:30 PM

March 21, 2015

Libre Music Production - Articles, Tutorials and News

Libre Music Production tutorial features in Linux Format!

Libre Music Production's tutorial, "Ultimate Guide to Getting Started With Guitarix" is featured in this month's issue of Linux Format, out now! (Issue 196, April 2015)

This is the second time we have featured in Linux Format. We'd like to once again thank Neil Mohr and all at Linux Format for making this happen!

by Conor at March 21, 2015 06:48 PM

March 20, 2015

Hackaday » digital audio hacks

Auto-sleep Hacked in PC Speakers

We can commiserate with [HardwareCoder] who would rather not leave his PC speakers on all the time. The Creative T20 set that he uses turns off when you turn the volume knob all the way down until it clicks. So shutting them off means repositioning the volume each time they're switched on again. This hack kills two birds with one stone by turning the speakers on and off automatically without touching that knob.

The system is based around an ATtiny45 and a few other simple components. It uses two ADCs to monitor the rear input channels of the PC speakers. If no sound is detected for more than one minute, the shutdown pin of the speakers’ amp chip is triggered. That’s not quite where the hack ends. We mentioned it monitors the rear input of the speakers, but it doesn’t monitor the front AUX input. An additional push button is used to disable the auto-sleep when using this front input. There is also a fancy PWM-based heartbeat on an LED when the speakers are sleeping.
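
The actual firmware is AVR code on the ATtiny45, but the decision logic is simple enough to sketch in a few lines of Python with stubbed-in hardware calls; the threshold and the shortened timeout below are invented for the demo, not taken from the project.

    # A rough sketch of the decision logic only -- the real project is AVR firmware on an
    # ATtiny45. Hardware access is stubbed out and the threshold/timeout values are invented
    # (the timeout is shortened from one minute so the demo actually reaches the sleep state).
    import random
    import time

    SILENCE_THRESHOLD = 8          # ADC counts treated as "no signal" (invented)
    SLEEP_AFTER_S = 0.3            # the real build waits one minute

    def read_rear_inputs():        # stand-in for the two rear-input ADC channels (always quiet here)
        return random.randint(0, 4), random.randint(0, 4)

    def aux_override_pressed():    # stand-in for the front-AUX push button
        return False

    def set_amp_shutdown(asleep):  # stand-in for toggling the amp chip's shutdown pin
        print("amp", "sleeping" if asleep else "awake")

    last_sound = time.monotonic()
    asleep = False
    for _ in range(20):            # a short simulated run; the sleep transition prints once
        left, right = read_rear_inputs()
        if max(left, right) > SILENCE_THRESHOLD or aux_override_pressed():
            last_sound = time.monotonic()
        should_sleep = time.monotonic() - last_sound > SLEEP_AFTER_S
        if should_sleep != asleep:
            asleep = should_sleep
            set_amp_shutdown(asleep)
        time.sleep(0.05)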

[HardwareCoder] was worried that we wouldn’t be interested in this since it’s quite similar to a hack we ran a few years ago. We hope you’ll agree it’s worth another look. He also warned us that the demo video was boring. We watched it all anyway and can confirm that there’s not much action there but we embedded it below anyway.


Filed under: ATtiny Hacks, digital audio hacks, peripherals hacks

by Mike Szczys at March 20, 2015 05:00 AM

March 18, 2015

Libre Music Production - Articles, Tutorials and News

Libre Music Production Workshop in Barcelona

Andrés Pérez López is planning a workshop in Barcelona promoting free and open source software for making music. It will also feature some material published here on Libre Music Production. The workshop is planned to take place in CC Convent de Sant Agustí from 22nd April to 20th May. The program for the workshop is as follows -

by Conor at March 18, 2015 05:08 PM