planet.linuxaudio.org

May 28, 2016

A touch of music

Modeling rhythms using numbers - part 2

This is a continuation of my previous post on modeling rhythms using numbers.

Euclidean rhythms

The Euclidean Rhythm in music was discovered by Godfried Toussaint in 2004 and is described in his 2005 paper "The Euclidean Algorithm Generates Traditional Musical Rhythms". Euclid's algorithm for the greatest common divisor of two numbers is used rhythmically: the two numbers give the counts of beats and silences, and the resulting patterns cover the majority of important World Music rhythms.

Do it yourself

You can play with a slightly generalized version of Euclidean rhythms in your browser using a p5js-based sketch I made to test my understanding of the algorithms involved. If it doesn't work in your preferred browser, retry with Google Chrome.

The code

The code may still evolve in the future. There are some possibilities not explored yet (e.g. using a ternary number system instead of binary to drive 3 sounds per circle). You can download the full code for the p5js sketch on GitHub.

Screenshot of the p5js sketch running.

The theory

So what does it do and how does it work? Each wheel contains a number of smaller circles. Each small circle represents a beat. With the length slider you decide how many beats are present on a wheel.  

Some beats are colored dark gray (these can be seen as strong beats), whereas other beats are colored white (weak beats). Strong and weak beats can each be assigned a different instrument. The target pattern length decides how many weak beats exist between the strong beats. Of course it's not always possible to honor this request: in a cycle with a length of 5 beats and a target pattern length of 3 beats (left wheel in the screenshot) we will have a phrase of 3 beats that conforms to the target pattern length, and a phrase consisting of the 2 remaining beats that makes a "best effort" to comply with the target pattern length.

Technically this is accomplished by running Euclid's algorithm. This algorithm is normally used to calculate the greatest common divisor of two numbers, but here we are mostly interested in its intermediate results. To calculate the greatest common divisor of an integer m and a smaller integer n, Euclid's algorithm repeatedly subtracts the smaller number n from the greater until the result is zero or becomes smaller than n, in which case it is called the remainder. This remainder is then repeatedly subtracted from the smaller number to obtain a new remainder. The process continues until the remainder is zero; when that happens, the corresponding smaller number is the greatest common divisor of the original two numbers n and m.
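
To make this concrete, here is a minimal Python sketch (not the p5js code of the sketch above) that runs Euclid's algorithm while recording the intermediate quotients and remainders we are after:

def euclid_steps(m, n):
    """Euclid's algorithm, keeping each step m = q*n + r instead of
    only the final greatest common divisor."""
    steps = []
    while n > 0:
        q, r = divmod(m, n)
        steps.append((m, q, n, r))
        m, n = n, r
    return steps, m  # m is now the gcd

# Left wheel of the screenshot: length 5, target pattern length 3.
for m, q, n, r in euclid_steps(5, 3)[0]:
    print("%d = (%d).%d + %d" % (m, q, n, r))
# prints: 5 = (1).3 + 2, then 3 = (1).2 + 1, then 2 = (2).1 + 0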

Let's try it out on the situation of the left wheel in the screenshot. The greater number m is 5 (length) and the smaller number n is 3 (target pattern length). Now the recipe says to repeatedly subtract 3 from 5 until you get something smaller than 3. We can do this exactly once:

5 - (1).3 = 2

We can rewrite this as:

5 = (1).3 + 2

This we can interpret as: the cycle of 5 beats is to be decomposed as 1 phrase with 3 beats, followed by a phrase with 2 beats (the remainder). Each phrase consists of a single strong beat followed by all weak beats. In a symbolic representation more easily read by musicians one might write: x..x. (In the notation of the previous part of this article one could also write 10010).

Euclid's algorithm doesn't stop here. Now we have to repeatedly subtract the remainder 2 from the smaller number 3:

3 = (1).2 + 1

This in turn can be read as: the phrase of 3 beats can be further decomposed as 1 phrase of 2 beats followed by a phrase consisting of 1 beat. In a symbolic representation: x.x

Euclid continues:

2 = (2).1 + 0

The phrase of two beats can be represented symbolically as: xx. We've reached remainder 0 and Euclid stops: apparently the greatest common divisor between 5 and 3 is 1.

Now it's time to realize what we really did: 
  • We decomposed a phrase of 5 beats in a phrase of 3 beats and a phrase of 2 beats making a rhythm x..x. 
  • Then we further decomposed the phrase of 3 beats into a phrase of 2 beats followed by a phrase of 1 beat. 
  • We can substitute this refined 3 beat phrase in our original rhythm of 5 = 3+2 beats to get a rhythm consisting of 5 = (2 + 1) + 2 beats: x.xx. 
  • I hope it's clear by now that by choosing how long to continue using Euclid's algorithm, we can decide how fine-grained we want our rhythms to become. 
  • This is where the max pattern length slider comes into play. 
The length slider and the target pattern length slider determine a rough division between strong and weak beats by running Euclid's algorithm just once, whereas the max pattern length slider lets you decide how long to carry on Euclid's algorithm to further refine the generated rhythm.
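
For the curious, here is a small Python sketch of my reading of the whole procedure (the real p5js source differs in the details, and the stopping rule tied to the max pattern length is my assumption):

def euclidean_rhythm(length, target, max_len):
    """Decompose a cycle of `length` beats into phrases via successive
    Euclid steps, refining until no phrase exceeds `max_len` beats.
    Each phrase renders as one strong beat 'x' plus weak beats '.'."""
    phrases = [length]
    m, n = length, target
    while n > 0 and max(phrases) > max_len:
        q, r = divmod(m, n)
        refined = []
        for p in phrases:
            if p == m:  # split into q phrases of n, plus the remainder
                refined += [n] * q + ([r] if r else [])
            else:
                refined.append(p)
        phrases = refined
        m, n = n, r
    return "".join("x" + "." * (p - 1) for p in phrases)

print(euclidean_rhythm(5, 3, 3))  # x..x.
print(euclidean_rhythm(5, 3, 2))  # x.xx.
print(euclidean_rhythm(5, 3, 1))  # xxxxx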


by Stefaan Himpe (noreply@blogger.com) at May 28, 2016 02:22 PM

May 24, 2016

digital audio hacks – Hackaday

Secret Listening to Elevator Music

While we don’t think this qualifies as a “fail”, it’s certainly not a triumph. But that’s what happens when you notice something funny and start to investigate: if you’re lucky, it ends with “Eureka!”, but most of the time it’s just “oh”. Still, it’s good to record the “ohs”.

Gökberk [gkbrk] Yaltıraklı was staying in a hotel long enough that he got bored and started snooping around the network, like you do. Breaking out Wireshark, he noticed a lot of UDP traffic on a nonstandard port, so he thought he’d have a look.

A couple of quick Python scripts later, he had downloaded a number of the sample packets, decoded them into hex, and found the signature of LAME, an MP3 encoder. He played around with byte offsets until he got a valid MP3 file out, and voilà, the fantastic reveal! It was the hotel’s elevator music stream — that he could hear outside in the corridor with much less effort. (Sad trombone.)
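
His scripts aren’t reproduced in the writeup, but the gist fits in a few lines of Python. In this sketch the port number is a placeholder and the sync scan is a naive stand-in for his byte-offset fiddling:

import socket

PORT = 1234  # stand-in for the nonstandard UDP port seen in Wireshark

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

with open("stream.mp3", "wb") as out:
    synced = False
    while True:
        payload, _ = sock.recvfrom(65535)
        if not synced:
            # MP3 frames begin with an 11-bit sync word: 0xFF, then a
            # byte with its top three bits set.
            for i in range(len(payload) - 1):
                if payload[i] == 0xFF and payload[i + 1] & 0xE0 == 0xE0:
                    payload = payload[i:]
                    synced = True
                    break
            else:
                continue  # no sync word yet; skip this packet
        out.write(payload)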

But just because nothing came up this time doesn’t mean that nothing will come up next time. And it’s important to keep your skills sharp for when you really need them. We love following along with peoples’ reverse engineering efforts, whether or not they end up finding anything. What oddball signals have you found lately?

Thanks [leonardo] for the tip! Wireshark graphic from Softpedia’s entry on Wireshark. Simulated-phosphor audio display by Oona [windytan] Räisänen (check that out!).


Filed under: digital audio hacks, security hacks, slider

by Elliot Williams at May 24, 2016 08:01 AM

May 22, 2016

aubio

Install aubio with pip

You can now install aubio's python module using pip:

$ pip install git+git://git.aubio.org/git/aubio

This should work for Python 2.x and Python 3.x, on Linux, Mac, and Windows. PyPy support is on its way.
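
Once it's in, a quick smoke test is to estimate the pitch of a sound file, along the lines of aubio's bundled demo scripts (the filename here is a placeholder):

import aubio

hop_size = 512
src = aubio.source("sample.wav", 0, hop_size)  # 0 = use the file's rate
pitch_o = aubio.pitch("yin", 2048, hop_size, src.samplerate)

while True:
    samples, read = src()
    print(pitch_o(samples)[0])  # pitch estimate in Hz for this hop
    if read < hop_size:
        break  # end of file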

May 22, 2016 01:00 PM

May 17, 2016

OSM podcast

May 14, 2016

Libre Music Production - Articles, Tutorials and News

EMAP - a GUI for Fluidsynth

EMAP (Easy Midi Audio Production) is a graphical user interface for the Fluidsynth soundfont synthesizer. It functions as a Jack compatible:

by admin at May 14, 2016 04:12 PM

May 11, 2016

Pid Eins

CfP is now open

The systemd.conf 2016 Call for Participation is Now Open!

We’d like to invite presentation and workshop proposals for systemd.conf 2016!

The conference will consist of three parts:

  • One day of workshops, consisting of in-depth (2-3hr) training and learning-by-doing sessions (Sept. 28th)
  • Two days of regular talks (Sept. 29th-30th)
  • One day of hackfest (Oct. 1st)

We are now accepting submissions for the first three days: proposals for workshops, training sessions and regular talks. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Use Cases: systemd in today’s and tomorrow’s devices and applications
  • systemd and containers, in the cloud and on servers
  • systemd in distributions
  • Embedded systemd and in IoT
  • systemd on the desktop
  • Networking with systemd
  • … and everything else related to systemd

Please submit your proposals by August 1st, 2016. Notification of acceptance will be sent out 1-2 weeks later.

If submitting a workshop proposal please contact the organizers for more details.

To submit a talk, please visit our CfP submission page.

For further information on systemd.conf 2016, please visit our conference web site.

by Lennart Poettering at May 11, 2016 10:00 PM

May 10, 2016

Linux – cdm createdigitalmusic

Trigger effects in Bitwig with MIDI, for free

In the latest chapter of “people on the Internet doing cool things for electronic music,” here’s a creation by Polarity. It lets you rapidly trigger effects parameters via MIDI. And if you’re a Bitwig Studio enthusiast, it’s available for free.

Clever stuff. YouTube has the download link and instructions.

Polarity, based in Berlin, describes himself thusly:

Hi, I'm Polarity and I make music at home in my small bedroom studio. I regularly record sessions and publish them here. I also broadcast live on Twitch from time to time.

Hello, my name is Polarity and I make music here in Berlin in my small bedroom studio. I regularly record sessions and publish them here. If you like, you can also follow along live on Twitch, where I often stream!

(Ah, I was wondering when I’d run into someone using Twitch – the live streaming service used largely by gamers – for music.)

More:
Twitch.tv: http://www.twitch.tv/polarity_berlin
Soundcloud: https://soundcloud.com/polarity

It’s an interesting form of promotion – give musicians something they can use. And if that’s where music is headed, maybe that’s not a bad thing. It means the means of making music will spread along with musical ideas, which, in a now-worldwide connected online village, seems a positive.

The post Trigger effects in Bitwig with MIDI, for free appeared first on cdm createdigitalmusic.

by Peter Kirn at May 10, 2016 09:29 PM

May 06, 2016

KXStudio News

Changes in KXStudio repositories

Hey everyone, just a small heads up about the KXStudio repositories.

If you use Debian Testing or the new Ubuntu 16.04 you probably saw some warnings regarding weak SHA1 keys when checking for updates.
We're aware of this issue and a fix is coming soon, but it will require some changes in the repositories.

First, we'll get rid of the 'lucid' builds and rebuild all of them in the 'trusty' series.
For those of you that were using Debian 6 or something older than Ubuntu 14.04, the repositories will stop working for you later this month.

Second, the gcc5 specific packages will be migrated from 'wily' series to 'xenial'.
This means you'll no longer be able to use the KXStudio repositories if you're running Ubuntu 15.10.
If that's the case for you, please update to 16.04 as soon as possible. Note that 15.10 will be officially end-of-life in 2 months.

And finally, the gcc5 packages will begin using Qt5 instead of Qt4 for some applications.
This will include Carla, Qtractor and the v1 series plugins.
Hopefully this won't break anything, but if it does please let us know.

That's it for now. Have a nice weekend!

by falkTX at May 06, 2016 10:00 AM

May 05, 2016

News – Ubuntu Studio

Help Us Put Some Polish on Ubuntu Studio

We are proud to have Ubuntu Studio 16.04 out in the wild. And the next release can and should be better. It WILL be better if you help! Are there specific packages that should be included or removed? Are there features you would like to see? We cannot promise to do everything you ask, but […]

by Set Hallstrom at May 05, 2016 09:58 AM

fundamental code

Lad Discussion Peaks

A History of LAD As Seen Through Heated Discussion

Warning
Summarizing years of discussions is a difficult task. I do not intend to distort the meaning of quotes, and if you have a particular quote which you feel is being misrepresented, please let me know. This article is designed to review the community as a whole, not to impose my opinions onto it. Posts reflect the sentiment of the user at the time of posting and may very likely not reflect the current state of projects or even the authors' present views.

What is LAD?

To get a bigger picture of what exactly has led up to the current state of affairs within LAD, I decided it was a good idea to read through some historic [LAD] discussions which made up some of those peaks in activity. This is somewhat biased towards the flame wars and community rantings, but those discussions should still reveal plenty about the evolution of pain points within the community. First, to frame this community analysis, let's look at how the linux audio mailing list officially defines its goal:

Our goal is to encourage widespread code re-use and cooperation, and to provide a common forum for all audio related software projects and an exchange point for a number of other special-interest mailing lists.

This simply shows that the mailing list should be a cooperative place where information is exchanged. A medium like this is a pretty darn valuable resource, and it was recognized as such early on.

The problem is that most Linux audio apps are developed by people who have full-time jobs doing other things. The problems involved in designing audio apps are so great that even those people who are able to work full time on Linux audio are often stumped as to how to implement the desired solutions.
— Mark Knecht October 2002

With varying levels of success there have been some huge discussions about the tradeoffs for different plugin standards, session managers, licenses, knobs (boy do audio devs love talking about knobs), and a variety of other topics. Even with the advantages that something like a community mailing list offers, it’s questionable whether people really consider linux audio developers as a whole a community.

I think the linux audio world is too small and varied to have a tightly knit organisation like the Gnome guys.
— Steve Harris June 2004
If you want to organize something go ahead and organize it, but please don’t tell me that I have to conform to some consumer driven vision of the great commercial future of Linux Audio.
— Jan Depner June 2004
The notion of "the development community" is a misnomer. In fact, what we have are "development communities" (plural).
— Fred Gleason February 2013

Fundamentally, the 'community' is made up of a large variety of independent individuals who need to have a large spread of specialization in order to make effective software. This has typically manifested itself in many different single-developer projects without a great sense of cohesion. This sort of hobbyist development has produced a lot of content, though the overall workflow may fall short of users' expectations, and many projects are subject to bitrot after the small development team moves on to other projects. Everyone has conflicting ideas on how things should work:

Everyone has their point of view. It’s not like you will tell someone "I want to add this feature to your app/api" and will say "Ok". You will simply get an answers like: -No, sorry, I wont accept that patch, i’d rather the library concentrates only on this. -Why dont you do it as a separate library? -Feel free to fork this and add it yourself. -Yeah I recognize it’s useful, but I think it’s out of place, inconsistent with the rest, that I try to keep simple.

— Juan June 2005

Before moving on to the issues presented in this community I want to take a brief detour showing how the linux-audio-dev mailing list and the linux-audio-user mailing list are linked. Within the overall community you frequently have developers who extensively use other LA tools and you have quite a few users who occasionally dabble in details generally reserved for developers. By looking at how many people fall into each one of these categories as a function of how often they post to LAD/LAU we can see that there is an overlap for casual users and a very strong overlap for heavyweight posters.

[chart: overall cross-posters between LAD and LAU]

This overall trend also exists on a much smaller scale. Within any given month there is a significant number of people who have posted on both lists.

[chart: cross-posters between LAD and LAU by month]

These individuals tend to generate a very significant number of the total posts in any given month as well.

[chart: posts from cross-posters by month]

Given this relationship, a good number of the problems observed on the LAD list should correspond to issues visible to users as well. In some cases, like the 'What sucks about Linux Audio' threads, there have been corresponding threads on both lists. In other cases ideas simply flow from one location to another.
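
As a sketch of how such an overlap can be counted (the data structures here are assumptions; the actual analysis parsed the mailing list archives):

from collections import Counter

lad_posts = Counter()  # author -> post count, filled from the LAD archives
lau_posts = Counter()  # author -> post count, filled from the LAU archives

def cross_posters(min_posts=1):
    """Authors active on both lists at a given activity level."""
    lad = {a for a, n in lad_posts.items() if n >= min_posts}
    lau = {a for a, n in lau_posts.items() if n >= min_posts}
    return lad & lau

# casual overlap vs. the much stronger heavyweight overlap
print(len(cross_posters(1)), len(cross_posters(50)))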

Initial Friction

In the past it wasn’t all that unusual for these disagreements to leak onto the mailing lists where they could grow substantially. A good example of this friction would be the Impro-Visor forking effort in 2009. In this thread a fork of an existing project had been created due to GPL licensing issues, but the way the forking was done produced disagreement within the community.

One of the main reasons why R. Stallman started GNU/FSF/GPL because of it’s social aspect. You learn kids on schools for example to corporate and help each other, being social.
— Grammostola Rosea Aug 2009
Forking a project is by it’s nature, and GPL "rights" aside, quite an impact on the author. He or she may have been sweating over their code base for some time, and i don’t think anyone could say they wouldn’t feel a bit awkward if they saw their code being forked, and developed further. Even more so for those who may not have developed their code under the assumption of GPL. From an "outsider’s" point of view, it would seem like a big decision to take both ways, if both parties have any sort of empathy.
— Alex Stone Aug 2009

The individual forking the project could be described as quite aggressive in his approach, which did spawn quite the meandering discussion. This thread was one of the first in my reading of [LAD] which seemed to significantly put users off, and it certainly didn't help that in June a rather heated flame war on RealtimeKit had already driven away that project's developer.

I have been following these list serves for a while, but I am just not interested in this kind of drama, and would like to mention for the record that I will no longer be following the lad or lau list serves.
— Justin Smith July 2009
In the last 18 months in LAD we’ve seen some pretty emotive flamewars about Reaper, LV2 in closed source software, LinuxSampler licensing, plugin output negotiation, JACK packaging, JACK and DBUS, PulseAudio, the way qjackctl starts up jackd, RTKit, and probably some other things I’ve forgotten. And this. This isn’t a high traffic list; the flames quite likely outnumber the rest.
— Chris Cannam July 2009
So now is the time to give your positive feedback and constructive critics. Don’t troll and don’t start another flame war unless your goal is to alienate me to stage of me detaching from this community. I will not respond to trolish and flamish mails, feel free to contact me with private mails if you prefer so.
— Nedko Arnaudov November 2009

As these discussions scale out of proportion it's easy for them to shift from a heated dialog into a flame war. These flame wars often result in huge misunderstandings, a lot of misinformation, tons of angry emails, and, most importantly, wasted time. Wasting time on these mailing lists is a significant offence if the lists are to retain users and keep the discussions targeted and helpful to those involved.

When Flamewars Aren’t Stoked

Of course these so-called flame wars are not entirely bad for a community to have.

Most of the occasionally 'caustic' folk in this community …​ understand that heated arguments are just a part of how developers find the best solution, and there is no ill will involved. It’s simply a useful tool/process - and arguably, I would say, the most effective way of hammering out good software design the world has seen to date.

Unfortunately there are always a few childish fools who don’t understand this concept (or think it’s a competition and can’t handle the fact that they were wrong) and elevate silly little arguments into long term personal grudges…​ Like trolls, they are best ignored while the rest of us get on with useful things.

What we’re looking for is less completely irrelevant noise like this. Particularly in response to jokes (blatant smileys and all).

— Drobilla July 2009

When a heated discussion stays on topic, real work can get done, though it is often off-putting to bystanders and those caught in the middle.

When Flamewars Are Stoked

Generally, for a lot of these flame wars to take flight there need to be a variety of people stoking the flames and not directly contributing to the discussion in a meaningful way (though this is not always the case). In most threads this was done by a variety of users, mostly ones who weren't very frequent posters. There was one repeat offender who during July 2010 really caused quite the meltdown within the LAD mailing list: Ralf Mardorf. I originally wasn't going to mention this, but essentially all flames and off-topic communication that July could be traced back to him.

Who is Ralf Mardorf?

I never programmed anything for Linux. I’m not able to do it and I don’t have the time to learn it.

I subscribed to the list, because I needed some information when I tried to program for Linux audio. I guess you want people to learn how to program for Linux audio. What you’re looking for is an attitude test, not a test about programming knowledge. I’ve got knowledge about programming, not about programming for Linux. You don’t like my attitude, but I hope you like other people who have the attitude that you want, even if they don’t have programming knowledge. (This is another issue, but not that one OS might or might not be good, better or what ever, so I guess I should reply :p)

Btw. on user lists a user don’t get some needed information, e.g. actually about what kernel is fine with rtirq and what kernel isn’t fine with it, so it can become impossible to set up an audio Linux, another reason why I’m subscribed to this list.

I’m and other users are responsible for my/their Linux installations, we should use all available sources to get knowledge. Some, me too, do so. In addition now you expect from users that they also should have the same attitude?

— Ralf Mardorf August 2009

And what happened in July?

Well, it started off in a discussion about MIDI jitter. This is something which can be quantified and discussed in terms of numbers quite easily. Ralf brought up an issue which could imply some interesting bugs, design flaws, or configuration issues. Some simple tests to find the issue were proposed, but the data was never returned to the list, resulting in posts such as:

I know very gifted musicians who do like me and they always 'preach' that I should stop using modern computers and I don’t know much averaged people. So the listeners in my flat for sure would be able to hear even failure that I’m unable to hear.
— Ralf Mardorf July 2010

There is no objective valid timing fluctuation. The musical savant next door might be much more sensitive than I’m, regarding to the groove, I don’t know …​ I guess there doesn’t live a musical savant next door, perhaps I’m this savant ;).

Anyway, forget about my assumptions about ms of jitter. I’m fine with the C64, Atari ST and all those stand alone sequencers from the 80ies. I tested did it, but I’m sure I’ll be able to hear hear the difference to my Linux computer …​ not when listening to all MIDI instruments played alone at the same time, but when listening to MIDI instruments + audio tracks.

— Ralf Mardorf July 2010
Sorry for this PS, I try to learn not to write such a high amount of mails :(, but it could be important.
— Ralf Mardorf July 2010

Of course this was pretty frustrating to a number of developers who wanted to solve the problem at hand.

You are comparing a banana and an orange to find out which one is sweeter. Given the nature of the problem it would help a lot to have as little differences between the systems under test, otherwise it’s impossible to track it down.
— Robin Gareus July 2010
We’re getting seriously off-topic here. After all, this is developer list. What happened to the ALSA MIDI Jitter measurements and test-samples?
— Robin Gareus July 2010

This was followed up by numerous off-topic threads. Ralf Mardorf ended up accounting for 44 of 463 posts in June and 165 of 653 messages in July. There are frequent replies to himself, and if you look at the timestamps from that month there's even a period where 7 emails are fired off to the list with no responses from anyone else among them. I'm honestly not sure if this is intentional trolling or not, but when a thread named "STEREO RULES" in all caps is created in the midst of the chaos you have to at least suspect it.

The sort of replies which can be seen in this month highlight some of the major issues at play. Developers generally want to know that their software works and that people can use it. They also crucially have very limited time considering that this work is typically done in addition to their other obligations without any return other than the enjoyment of it.

General Thoughts

So, up to this point in history flamewars have been a problem and they have been fueled by a number of individuals who intentionally or otherwise don’t contribute substantially to the original aim of the discussion. Both users and developers for linux audio software seem frustrated with this as it makes it difficult to obtain information, convey accurate information, and interact with other members of the community without wading through a lot of noise. Some of these issues are mirrored in more recent 'heated discussions', but this writeup is long enough, so that will have to wait for a part two.

May 05, 2016 04:00 AM

May 03, 2016

Libre Music Production - Articles, Tutorials and News

Guitarix 0.35 released including much anticipated interface redesign

Guitarix has recently seen a new release, version 0.35. As always there are new plugins and bug fixes, but the big news with this release is the overhauled interface, compliments of Markus Schmidt. Markus is also responsible for CALF Studio Gear's plugin design, as well as the DSP of many of its plugins.

by Conor at May 03, 2016 07:10 PM

April 27, 2016

rncbc.org

Qtractor 0.7.7 - The Haziest Photon is out!

Hi everybody,

On the wrap of the late miniLAC2016@c-base.org Berlin (April 8-10), where this Yet Same Old Qstuff* (continued) workshop babbling of yours truly (slides, videos) took place.

There's really one (big) thing to keep in mind, as always: Qtractor is not, never was, meant to be a do-it-all monolith DAW. Quite frankly it isn't a pure modular model either. Maybe we can agree on calling it a hybrid perhaps? And still, all this time, it has been just truthful to its original mission statement--modulo some Qt major version numbers--nb. it started on Qt3 (2005-2007), then Qt4 (2008-2014), it is now Qt5, full throttle.

Now,

It must be like I started saying: uh, this is probably the best dot or, if you rather call it that way, beta release of them all!

Qtractor 0.7.7 (haziest photon) is out!

Everybody is here compelled to update.
Leave no excuses behind.

As for the mission statement coined above, you know it's the same as ever was (and it now goes to eleven years in the making):

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Change-log:

  • LV2 UI Touch feature/interface support added.
  • MIDI aware plug-ins are now void from multiple or parallel instantiation.
  • MIDI tracks and buses plug-in chains now honor the number of effective audio channels from the assigned audio output bus; dedicated audio output ports will keep default to the stereo two channels.
  • Plug-in rescan option has been added to plug-ins selection dialog (yet another suggestion by Frank Neumann, thanks).
  • Dropped the --enable-qt5 from configure as found redundant given that's the build default anyway (suggestion by Guido Scholz, thanks).
  • Immediate visual sync has been added to main and MIDI clip editor thumb-views (a request by Frank Neumann, thanks).
  • Fixed an old MIDI clip editor contents disappearing bug, which manifested when drawing free-hand (ie. Edit/Select Mode/Edit Draw is on) over and behind its start/beginning position (while in the lower view pane).

Wiki (on going, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

Enjoy && Have fun.

by rncbc at April 27, 2016 06:30 PM

April 23, 2016

digital audio hacks – Hackaday

Color-Changing LED Makes Techno Music

As much as we like addressable LEDs for their obedience, why do we always have to control everything? At least participants of the MusicMaker Hacklab, which was part of the Artefact Festival in February this year, have learned that sometimes we should just sit down with our electronics and listen.

With the end of the Artefact Festival approaching, they still had this leftover color-changing LED from an otherwise scavenged toy reverb microphone. When powered by a 9 V battery, the LED would start a tiny light show, flashing, fading and mixing the very best out of its three primary colors. Acoustically, however, it spent most of its time in silent dignity.


As you may know, this kind of LED contains a tiny integrated circuit. This IC pulse-width-modulates the current through the light-emitting junctions in preprogrammed patterns, thus creating the colorful light effects.

To give the LED a voice, the participants added a 1 kΩ series resistor to the LED’s “anode”, which effectively translates variations in the current passing through the LED into measurable variations of voltage. This signal could then be fed into a small speaker or a mixing console. The LED expressed its gratitude for the life-changing modification by chanting its very own disco song.
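
The arithmetic is friendly here: by Ohm's law the voltage across the resistor tracks the LED current directly, so even a modest swing yields a healthy audio-level signal (the current swing below is an assumed figure, not a measurement):

R = 1000         # the 1 kΩ series resistor from the hack
delta_i = 0.002  # assume a 2 mA swing in LED current
delta_v = delta_i * R
print("%.1f V signal swing" % delta_v)  # 2.0 V, plenty for a mixer input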


This particular IC seems to operate at a switching frequency of about 1.1 kHz and the resulting square wave signal noticeably dominates the mix. However, not everything we hear there may be explained solely by the PWM. There are those rhythmic “thump” noises, shifts in pitch and amplitude of the sound, and more to analyze and learn from. Not wanting to spoil your fun of making sense of the beeps and cracks (feel free to spoil as much as you want in the comments!), we just say: enjoy the video, and thanks to the people of STUK Belgium for sharing their findings.


Filed under: digital audio hacks, led hacks

by Moritz Walter at April 23, 2016 11:00 AM

April 22, 2016

open-source – cdm createdigitalmusic

Hack – listen to one LED create its own micro rave

Surprise: there’s a little tiny rave hiding inside a flickering LED lamp from a toy. Fortunately, we can bring it out – and you can try this yourself with LED circuitry, or just download our sound to remix.

Surprise Super Fun Disco LED Hack from Darsha Hewitt on Vimeo.

But let’s back up and tell the story of how this began.

The latest edition of our MusicMakers Hacklab brought us to Leuven, Belgium, and the Artefact Festival held at STUK. Now, with all these things, very often people come up with lofty (here, literally lofty) ideas – and that’s definitely half the fun. (We had one team flying an unmanned drone as a musical instrument.)

But sometimes it’s simple little ideas that steal the show. And so it was with a single LED superstar. Amine Mentani brought some plastic toys with flickering lights, and participant Arvid Jense, along with my co-facilitator and all-around artist/inventor/magician Darsha Hewitt, decided to make a sound experiment with them. They were joined by participant (and one-time European Space Agency artist resident) Elvire Flocken-Vitez.

It seems that the same timing used to make that faux flickering light effect generates analog voltages that sound, well, amazing. (See more on this technique in comments from readers below.)


You might not get as lucky as we did with animated LEDs you find – or you might find something special, it’s tough to say. But you can certainly try it out yourself, following the instructions here and on a little site Darsha set up (or in the picture here).

And by popular demand of all our Hacklabbers from Belgium, we’ve also made the sound itself available. So, you can try remixing it, sampling it, dancing to it, whatever.


https://freesound.org/people/dardi_2000/sounds/343087/

More:

http://www.darsha.org/artwork/disco-led-hack/

And follow our MusicMakers series on Facebook (or stay tuned here to CDM).

The post Hack – listen to one LED create its own micro rave appeared first on cdm createdigitalmusic.

by Peter Kirn at April 22, 2016 02:07 PM

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.1 stable release (binaries)

Pre-built binary images of the 1.8.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 22, 2016 12:00 PM

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.6.4 stable release (binaries)

Pre-built binary images of the 1.6.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 22, 2016 11:00 AM

April 21, 2016

News – Ubuntu Studio

New Ubuntu Studio Release and New Project Lead!

New Project Lead In January 2016 we had an election for a new project lead, and the winner was Set Hallström, who will be taking over the project lead position right after this release. He will be continuing for another two years until the next election in 2018. The team of developers has also seen […]

by Set Hallstrom at April 21, 2016 04:44 PM

April 20, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.0. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

April 20, 2016 04:00 PM

OSM podcast

aubio

node-aubio


Thanks to Gray Leonard, aubio now has its own bindings for node.js.

A fork of Gray's git repo can be found at:

A simple example showing how to extract bpm and pitch from an audio file with node-aubio is included.

To install node-aubio, make sure libaubio is installed on your system, and follow the instructions at npmjs.com.

April 20, 2016 12:28 PM

April 17, 2016

Libre Music Production - Articles, Tutorials and News

New video tutorial describing a complete audio production workflow using Muse and Ardour

Libre Music Production proudly presents Michael Oswald's new 8+ hour video tutorial describing a complete audio production workflow using MusE and Ardour.

In this tutorial you will learn how to import, clean up and edit a MIDI file using MusE. It then goes on to show how to import the MIDI file into Ardour and set up instruments to play the song, then moves on to guitar recording and audio editing in Ardour, selecting sounds and editing several takes.

The tutorial continues with vocal recording and editing, mixing and mastering the song.

by admin at April 17, 2016 04:19 PM

A complete audio production workflow with Muse and Ardour

Audio production with Muse and Ardour is a 6 part video tutorial showing a complete workflow using FLOSS audio tools.

In this tutorial you will learn how to import, clean up and edit a MIDI file using MusE. It then goes on to show how to import the MIDI file into Ardour and set up instruments to play the song.

On to guitar recording and audio editing in Ardour, selecting sounds and editing the takes.

The tutorial continues with vocal recording and editing, mixing and mastering the song.

by admin at April 17, 2016 02:51 PM

April 15, 2016

digital audio hacks – Hackaday

Hackaday Dictionary: Ultrasonic Communications

Say you’ve got a neat gadget you are building. You need to send data to it, but you want to keep it simple. You could add a WiFi interface, but that sucks up power. Bluetooth Low Energy uses less power, but it can get complicated, and it’s overkill if you are just looking to send a small amount of data. If your device has a microphone, there is another way that you might not have considered: ultrasonic communications.

The idea of using sound frequencies above the limit of human hearing has a number of advantages. Most devices already have speakers and microphones capable of sending and receiving ultrasonic signals, so there is no need for extra hardware. Ultrasonic frequencies are beyond the range of human hearing, so they won’t usually be audible. They can also be transmitted alongside standard audio, so they won’t interfere with the function of a media device.

A number of gadgets already use this type of communications. The Google Chromecast HDMI dongle can use it, overlaying an ultrasonic signal on the audio output it sends to the TV. It uses this to pair with a guest device by sending a 4-digit code over ultrasound that authorizes it to join an ad-hoc WiFi network and stream content to it. The idea is that, if the device can’t pick up the ultrasound signal, it probably wasn’t invited to the party.

We reported some time ago on an implementation of ultrasonic data using GNU Radio by [Chris]. His writeup goes into a lot of detail on how he set the system up and shows a simple demo using a laptop speaker and microphone. He used Frequency Shift Keying (FSK) to encode the data into the audio, using a base frequency of 23 kHz and sending data in five-byte packets.
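
As a toy illustration of that scheme (everything except the 23 kHz base frequency is a guessed parameter; [Chris]’s real modem is built in GNU Radio), an FSK modulator just emits one tone per bit:

import numpy as np

RATE = 96000           # sample rate high enough to carry ~23 kHz cleanly
F0, F1 = 23000, 23400  # "space" and "mark" tones around the base frequency
BIT_SAMPLES = 480      # 5 ms per bit, i.e. 200 bits per second

def modulate(data):
    """Turn bytes into an FSK waveform, one sine burst per bit."""
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    t = np.arange(BIT_SAMPLES) / float(RATE)
    tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[int(b)] for b in bits]) * 0.5  # headroom

signal = modulate(b"hello")  # write to a 96 kHz WAV to play it back

A real modem keeps the phase continuous across bit boundaries; the abrupt jumps in this toy splatter energy into the audible band.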

Since then, [Chris] has expanded his system to be bi-directional, with the two devices communicating on different frequencies. He also changed the modulation scheme to Gaussian frequency shift keying for reliability and even added a virtual driver layer on top, so the connection can transfer TCP/IP traffic. Yup, he built an ultrasonic network connection.

His implementation underlines one of the problems with this type of data transmission, though: It is slow. The speed of the data transmission is limited by the ability of the system to transmit and receive the data, and [Chris] found that he needed to keep it slow to work with cheap microphones and speakers. Specifically, he had to keep the number of samples per symbol used by the GFSK modulation high, giving the receiver more time to spot the frequency shift for each symbol in the data stream. That’s probably because the speaker and microphone aren’t specifically designed for this sort of frequency. The system also requires a preamble before each data packet, which adds to the latency of the connection.

So ultrasonic communications may not be fast, but they are harder to intercept than WiFi or other radio frequency signals. Especially if you aren’t looking for them, which inspired hacker [Kate Murphy] to create Quietnet, a simple Python chat system that uses the PyAudio library to send ultrasonic chat messages. For extra security, the system even allows you to change the carrier frequency, which could be useful if the feds are onto you. Whether overt, covert, or just for simple hardware configuration, ultrasonic communications is something to consider playing around with and adding to your bag of hardware tricks.


Filed under: digital audio hacks, Hackaday Columns, wireless hacks

by Richard Baguley at April 15, 2016 05:01 PM

April 14, 2016

GStreamer News

GStreamer 1.6.4 stable release

The GStreamer team is pleased to announce the second bugfix release in the old stable 1.6 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.6.x. For a full list of bugfixes see Bugzilla.

See /releases/1.6/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-editing-services, gst-python, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-editing-services, gst-python.

April 14, 2016 06:00 PM

Linux – cdm createdigitalmusic

A totally free DAW and live environment, built in SuperCollider: LNX_Studio

Imagine you had a DAW with lots of live tools and synths and effects – a bit like FL Studio or Ableton Live – and it was completely free. (Free as in beer, free as in freedom.) That’s already fairly cool. Now imagine that everything in that environment – every synth, every effect, every pattern maker – was built in SuperCollider, the powerful free coding language for electronic music. And imagine you could add your own stuff, just by coding, and it ran natively. That moves from fairly cool to insanely cool. And it’s what you get with LNX_Studio, a free environment that runs on any OS (Mac now, other builds coming), and that got a major upgrade recently. Let’s have a look.

LNX_Studio is a full-blown synth studio. You can do end-to-end production of entire tracks in it, if you choose. Included:

  • Virtual analog synths, effects, drum machines
  • Step sequencers, piano roll (with MIDI import), outboard gear control
  • Mix engine and architecture
  • Record audio output
  • Automation, presets, and programs (which, with quick recall, make this a nice idea starter or live setup)
  • Chord library, full MIDI output and external equipment integration

It’s best compared to the main view of FL Studio, or the basic rack in Reason, or the devices in Ableton Live, in that the focus is building up songs through patterns and instruments and effects. What you don’t get is audio input, multitracking, or that sort of linear arrangement. Then again, for a lot of electronic music, that’s still appealing – and you could always combine this with something like Ardour (to stay in free software) when it’s time to record tracks.

Also good in this age of external gear lust, all those pattern generators and MIDI control layouts play nice with outboard gear. There’s even an “external device” which you can map to outboard controls.

But all of this you can do in other software. And it’d be wrong to describe LNX_Studio as a free, poor man’s version of that gear, because it can do two things those tools can’t.

First, it’s entirely networked. You can hop onto a local network or the Internet and collaborate with other users. (Theoretically, anyway – I haven’t gotten to try this out yet, but the configuration looks dead simple.)

Second, and this I did play with, you can write your own synths and effects in SuperCollider and run them right in the environment. And unlike environments like Max for Live, that integration is fully native to the tool. You just hop right in, add some code, and go. To existing SuperCollider users, this is finally an integrated environment for running all your creations. To those who aren’t, this might get you hooked.

Here’s a closer look in pictures:

When you first get started, you’re presented with a structured environment to add instruments, effects, pattern generators, and so on.

Fully loaded, the environment resembles portions of FL Studio or Ableton Live. You get a conventional mixer display, and easy access to your tools.

Oh, yeah, and out of the box, you get some powerful, nice-sounding virtual analog synths.

But here’s the powerful part – inside every synth is SuperCollider code you can easily modify. And you can add your own code using this powerful, object-oriented, free and open source code environment for musicians.

Effects can use SuperCollider code, too. There’s also a widget library, so adding a graphical user interface is easy.

But whether you’re ready to code or not doesn’t matter much – there’s a lot to play with either way. Sequencers…

Drum machines…

More instruments…

You also get chord generators and (here) a piano roll editor.

When you’re ready to play with others, there’s also network capability for jamming in the same room or over a network (or the Internet).

Version 2.0 is just out, and adds loads of functionality and polish. Most importantly, you can add your own sound samples, and work with everything inside a mixer environment with automation. Overview of the new features (in case you saw the older version):

Main Studio
Channel style Mixer
Programs (group & sequence Instrument presets)
Automation
Auto fade in/out
Levels display
Synchronise channels independently
Sample support in GS Rhythm & SCCode instruments
WebBrowser for importing samples directly from the internet
Local sample support
Sample Cache for off-line use
Bum Note
Now polyphonic
Added Triangle wave & Noise
High Pass filter
2 Sync-able LFOs
PWM
Melody Maker module (chord progressions, melodies + hocket)
Import MIDI files
Audio In
Support for External instruments & effects
Interfaces for Moog Sub37, Roland JP-08, Korg Volca series
Many new instruments & effects added to SCCode & SCCodeF

I love what’s happening with Eurorack and hardware modular – and there’s nothing like physical knobs and cables. But that said, for anyone who brags that modular environments are a “clean slate” and open environment, I think they’d do well to look at this, too. The ability to code weird new instruments and effects to me is also a way to find originality. And since not everyone can budget for buying hardware, you can run this right now, on any computer you already own, for free. I think that’s wonderful, because it means all you need is your brain and some creativity. And that’s a great thing.

Give the software a try:

http://lnxstudio.sourceforge.net

And congrats to Neil Cosgrove for his work on this – let’s send some love and support his way.

The post A totally free DAW and live environment, built in SuperCollider: LNX_Studio appeared first on cdm createdigitalmusic.

by Peter Kirn at April 14, 2016 05:05 PM

blog4

Tina Mariane Krogh Madsen: Body Interfaces: A Processual Scripting

TMS member Tina Mariane Krogh Madsen is going to show a week-long durational performative installation with guests in Berlin at Galerie Grüntaler 9 (at Grüntaler Strasse 9, as the name suggests) from April 15 to 22:

Body Interfaces: A Processual Scripting is a performative installation generated by Tina Mariane Krogh Madsen over the duration of one week. It wishes to raise questions regarding the role of documentation in artistic research, its status and how it can feed into other processes.
In the spatial frames of Grüntaler9 the artist will be intensively working with and redeveloping her own concept of an archive and resources based on the documents and remains from previous performances and interventions, which will additionally be resulting in other performance structures.
The installation is in an ongoing process that can be witnessed every day from 2-8pm. On selected days there will be guests invited to discuss and perform with the artist in the space.
::::::::: Tina Mariane Krogh Madsen’s research works with the body and (as) materiality via combining understandings of it that are derived from site-specific performance art and from working with technology.
A crucial part of this research takes the form of interventions and performances, collectively titled Body Interfaces, first generated during a residency in Iceland (May, 2015) and since then developed and performed in various contexts, constantly challenging their own format and method. These practices deal with the body as interface for experience and communication in relation to other materialities as well as the environment that surrounds and interacts with these. The interface is here read as a transmitting entity and agency between the body and the surrounding surfaces. An important part of Body Interfaces is its own documentation, in various formats, shapes and scripted entities.
The processual installation is open daily from 14:00 until 20:00 and can be witnessed at all times. The processual scripting has a dynamic approach to the space and therefore the installation will arrive and evolve throughout the days; nothing has been installed in advance – all is part of the process.
The research topic will be shared through performances and interventions as well as an ongoing reel of performance documentation.
Friday April 15: inauguration and installation:
- 14:00h - 19:00h: performative installation (working session)
- 20:00h - 20:30: Body Interfaces Performance
- from 21:00: Fridäy Süpperclüb (food and drinks by donation)
Saturday April 16: sound (research collaborator: Malte Steiner):
- 14:00h - 19:00h: performative installation (working session)
- 19:00h: sound performance
Sunday April 17: body and site (research collaborator: Nathalie Fari):
- 14:00 - 17:00: performative installation (working session)
- 17:00 - 20:00: performance interventions with Nathalie Fari
Monday April 18: archiving as practice / restructuring and re-contextualizing materials (research collaborator: Joel Verwimp):
14:00h - 20:00h: performative installation with performance interventions (working session)
Tuesday April 19: chance as method – invigorating performative structures:
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Wednesday April 20: instruction: re-performance / transformation I (research collaborator: Aleks Slota):
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Thursday April 21: ritual(s)
- 14:00h - 20:00h: performative installation with performance interventions (working session)
Friday April 22: instruction: re-performance / transformation II (research collaborator: Ilya Noé):
- 14:00h - 18:00h: performative installation with performance interventions (working session)
- 20:00h: Body Interfaces Processual Scripting Resume
- from 21:00: Fridäy Süpperclüb (food and drinks by donation)






by herrsteiner (noreply@blogger.com) at April 14, 2016 03:32 PM

April 11, 2016

OpenAV

Fabla2 @ miniLAC video!

In an amazingly short time, the streaming videos of miniLAC are online!! OpenAV’s Fabla2 video is linked here; for other streaming links, check out https://media.ccc.de/v/minilac16-openav. Huge thanks to the Stream-Team for their amazing work!

by harry at April 11, 2016 09:11 AM

April 06, 2016

Libre Music Production - Articles, Tutorials and News

The Qstuff* Spring'16 Release Frenzy

In the wake of the miniLAC2016@c-base.org Berlin, and keeping up with tradition, the most venerable of the Qstuff* are under the so-called Spring'16 release frenzy.

Enjoy the party!

by yassinphilip at April 06, 2016 05:01 PM

April 05, 2016

OpenAV

miniLAC 2016!

Hey, it's miniLAC this weekend! Are you near Berlin? You should attend: the latest and greatest Linux Audio demos and software, and a chance to meet the community! Check out the schedule here. OpenAV is running a workshop on Fabla2 – showcasing the advanced features that make it suitable for live performance and studio-grade drums, plus lots of fun with the new hardware integration for the Maschine…

by harry at April 05, 2016 07:35 PM

rncbc.org

Qtractor 0.7.6 - A Hazier Photon is released!


Hey, the Spring'16 release frenzy isn't over just yet ;)

Keeping up with the tradition,

Qtractor 0.7.6 (a hazier photon) is released!

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.


Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor

Change-log:

  • Plug-ins search path and out-of-process (aka. dummy) VST plug-in inventory scanning has been heavily refactored.
  • Fixed and optimized all dummy processing for plugins with more audio inputs and/or outputs than channels on a track or bus where it's inserted.
  • Fixed relative/absolute path mapping when saving/loading custom LV2 Plug-in State Presets.

Wiki (on going, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

 

Enjoy && Keep the fun, always.

by rncbc at April 05, 2016 06:30 PM

The Qstuff* Spring'16 Release Frenzy

In the wake of the miniLAC2016@c-base.org Berlin, and keeping up with tradition, the most venerable of the Qstuff* are under the so-called Spring'16 release frenzy.

Enjoy the party!

Details are as follows...

 

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.2 (spring'16) released!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:
http://qjackctl.sourceforge.net
Downloads:
http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl

Change-log:

  • Added a brand new "Enable JACK D-BUS interface" option, split from the old common "Enable D-BUS interface" setup option which now refers to its own self D-BUS interface exclusively.
  • Dropped old "Start minimized to system tray" option from setup.
  • Add double-click action (toggle start/stop) to systray (a pull request by Joel Moberg, thanks).
  • Added application keywords to freedesktop.org's AppData.
  • System-tray icon context menu has been fixed/hacked to show up again on Plasma 5 (aka. KDE5) notification status area.
  • Switched column entries in the unified interface device combo-box to make it work for macosx/coreaudio again.
  • Blind fix to an FTBFS on macosx/coreaudio platforms, a leftover from the unified interface device selection combo-box inception almost two years ago.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Late French (fr) translation update (by Olivier Humbert, thanks).

License:

QjackCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.1 (spring'16) released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsynth.sourceforge.net
Downloads:
http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth

Change-log:

  • Dropped old "Start minimized to system tray" option from setup.
  • CMake script lists update (patch by Orcan Ogetbil, thanks).
  • Added application keywords to freedesktop.org's AppData.
  • System-tray icon context menu has been fixed/hacked to show up again on Plasma 5 (aka. KDE5) notifications status area.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Messages standard output capture has been improved to cope with both ways a non-blocking pipe may behave.
  • Regression fix for invalid system-tray icon dimensions reported by some desktop environment frameworks.

License:

Qsynth is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.0 (spring'16) released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsampler.sourceforge.net
Downloads:
http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code
https://github.com/rncbc/qsampler

Change-log:

  • Added application keywords to freedesktop.org's AppData.
  • Prevent x11extras module from use on non-X11/Unix platforms.
  • Messages standard output capture has been improved again, now coping with both ways a non-blocking pipe may behave.
  • Single/unique application instance control adapted to Qt5/X11.

License:

Qsampler is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

QXGEdit - A Qt XG Editor

QXGEdit 0.4.0 (spring'16) released!

QXGEdit is a live XG instrument editor, specialized in editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG, and thus probably a baseline for many other XG devices.

Website:
http://qxgedit.sourceforge.net
Downloads:
http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit

Change-log:

  • Prevent x11extras module from use on non-X11/Unix platforms.
  • French (fr) translations update (by Olivier Humbert, thanks).
  • Fixed the port handling in MIDI 14-bit controllers input caching.

License:

QXGEdit is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.0 (spring'16) released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. It was inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and has also been promoted to the Maemo package repositories. Nevertheless, QmidiCtl may still be found effective as a regular desktop application as well.
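
Under the hood there is no special framing: ipMIDI-style tools simply drop raw MIDI bytes into UDP multicast datagrams. Below is a minimal, hedged C++ sketch of that idea; the multicast group 225.0.0.37 and port 21928 are the usual ipMIDI defaults, but treat them as assumptions to be checked against your own QmidiCtl/QmidiNet settings.

    // Hedged sketch: emit one raw MIDI note-on as an ipMIDI-style
    // UDP multicast datagram (POSIX sockets, Linux).
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { std::perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(21928);                     // assumed ipMIDI "port 1"
        inet_pton(AF_INET, "225.0.0.37", &addr.sin_addr); // assumed ipMIDI group

        // Note-on, channel 1: status 0x90, key 60 (middle C), velocity 100.
        const uint8_t note_on[3] = { 0x90, 60, 100 };
        sendto(sock, note_on, sizeof note_on, 0,
               reinterpret_cast<sockaddr*>(&addr), sizeof addr);

        close(sock);
        return 0;
    }

Any ipMIDI-compatible listener on the same group and port (QmidiNet, for instance) should pick this up as an ordinary note-on.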

Website:
http://qmidictl.sourceforge.net
Downloads:
http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl

Change-log:

  • Added application keywords to freedesktop.org's AppData.

License:

QmidiCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.0 (spring'16) released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. It was inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Website:
http://qmidinet.sourceforge.net
Downloads:
http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet

Change-log:

  • Allegedly fixed the setsockopt(IP_MULTICAST_LOOP) reverse semantics on Windows platforms (as suggested by Paul Davis, from the Ardour ipMIDI implementation, thanks).
  • Added application keywords to freedesktop.org's AppData.

License:

QmidiNet is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


 

Enjoy && keep the fun, always!

by rncbc at April 05, 2016 05:30 PM

April 03, 2016

Pid Eins

Announcing systemd.conf 2016

Announcing systemd.conf 2016

We are happy to announce the 2016 installment of systemd.conf, the conference of the systemd project!

After our successful first conference in 2015, we’d like to repeat the event for a second time in 2016. The conference will take place from September 28th until October 1st, 2016, at betahaus in Berlin, Germany. The event is a few days before LinuxCon Europe, which is also located in Berlin this year. This year, the conference will consist of two days of presentations, a one-day hackfest and one day of hands-on training sessions.

The website is online now; please visit https://conf.systemd.io/.

Tickets at early-bird prices are available already. Purchase them at https://ti.to/systemdconf/systemdconf-2016.

The Call for Presentations will open soon; we are looking forward to your submissions! A separate announcement will be published as soon as the CfP is open.

systemd.conf 2016 is organized jointly by the systemd community and kinvolk.io.

We are looking for sponsors! We’ve got early commitments from some of last year’s sponsors: Collabora, Pengutronix & Red Hat. Please see the web site for details about how your company may become a sponsor, too.

If you have any questions, please contact us at info@systemd.io.

by Lennart Poettering at April 03, 2016 10:00 PM

Midichlorians in the blood

Taking Back From Android



Android is an operating system developed by Google around the Linux kernel. It is unlike any other Linux distribution: not only have many common subsystems been replaced by other components, but the user interface is also radically different, based on the Java language running in a virtual machine called Dalvik.

An example of a subsystem removed from the Linux kernel is the ALSA Sequencer, a key piece for MIDI input/output with routing and scheduling that makes Linux comparable in capabilities to Mac OS X for musical applications (for musicians, not whistlers) and years ahead of Microsoft Windows in terms of infrastructure. Android did not offer anything comparable until Android 6 (Marshmallow).
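
For readers who have never touched it, here is a minimal, hedged sketch of what the ALSA Sequencer API looks like from C++ (build with -lasound; the client and port names are arbitrary): a client registers itself, creates a port, and hands events to the kernel for routing.

    // Minimal sketch: register an ALSA Sequencer client and send one
    // note-on event directly to whoever subscribes to our output port.
    #include <alsa/asoundlib.h>
    #include <cstdio>

    int main() {
        snd_seq_t *seq = nullptr;
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0) {
            std::fprintf(stderr, "cannot open ALSA sequencer\n");
            return 1;
        }
        snd_seq_set_client_name(seq, "demo-client");
        int port = snd_seq_create_simple_port(seq, "out",
            SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
            SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);

        snd_seq_event_t ev;
        snd_seq_ev_clear(&ev);
        snd_seq_ev_set_source(&ev, port);
        snd_seq_ev_set_subs(&ev);               // deliver to all subscribers
        snd_seq_ev_set_direct(&ev);             // no queue: deliver immediately
        snd_seq_ev_set_noteon(&ev, 0, 60, 100); // channel 0, middle C, vel. 100
        snd_seq_event_output(seq, &ev);
        snd_seq_drain_output(seq);

        snd_seq_close(seq);
        return 0;
    }

The scheduling side (event queues with tick and real-time stamps) is what sets the sequencer apart from plain rawmidi access, and is exactly the infrastructure Android lacked for so long.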

Another subsystem from userspace Linux not included in Android is PulseAudio. Instead, OpenSL ES is what can be found on Android for digital audio output and input.

But Android also has some shining components. One of them is Sonivox EAS (originally created by Sonic Network, Inc.), released under the Apache 2 license; it is the MIDI synthesizer used by my VMPK for Android application to produce noise. Funnily enough, it provided some legal fuel to Oracle in its battle against Google, because of some Java binding sources that were included in the AOSP repositories. It is not particularly outstanding in terms of audio quality, but it has the ability to provide real-time wavetable GM synthesis without using external soundfont files, and it consumes very few resources, so it may be indicated for Linux projects on small embedded devices. Let's take it to Linux, then!

So the plan is: for the next Drumstick release, there will be a Drumstick-RT backend using Sonivox EAS. The audio output part is yet undecided, but for Linux it will probably be PulseAudio. In the same spirit, for Mac OS X there will be a backend leveraging the internal Apple DLS synth. These backends will be available in addition to the current FluidSynth one, which provides very good quality but uses expensive floating-point DSP calculations and requires external soundfont files.

Meanwhile, I've published this repository on GitHub, including a port of Sonivox EAS for Linux with ALSA Sequencer MIDI input and PulseAudio output. It also depends on Qt5 and Drumstick. Enjoy!

Sonivox EAS for Linux and Qt:
https://github.com/pedrolcl/Linux-SonivoxEas

Related Android project:
https://github.com/pedrolcl/android/tree/master/NativeGMSynth

by Pedro Lopez-Cabanillas (noreply@blogger.com) at April 03, 2016 04:59 PM

March 31, 2016

digital audio hacks – Hackaday

The ATtiny MIDI Plug Synth

MIDI was created over thirty years ago to connect electronic instruments, synths, sequencers, and computers together. Of course, this means MIDI was meant to be used with computers that are now thirty years old, and now even the tiniest microcontrollers have enough processing power to take a MIDI signal and create digital audio. [mitxela]’s polyphonic synth for the ATtiny2313 does just that, using only two kilobytes of Flash and fitting inside a MIDI jack.

Putting a MIDI synth into a MIDI plug is something we’ve seen a few times before. In fact, [mitxela] did the same thing a few months ago with an ATtiny85, and [Jan Ostman]’s DSP-G1 does the same thing with a tiny ARM chip. Building one of these with an ATtiny2313 is really pushing the envelope, though. With only 2 kB of Flash memory and 128 bytes of RAM, there’s not a lot of space in this chip. Making a polyphonic synth plug is even harder.

The circuit for [mitxela]’s chip is extremely simple, with power and MIDI data provided by a MIDI keyboard, a 20 MHz crystal, and audio output provided by eight digital pins summed with a bunch of resistors. Yes, this is only a square wave synth, and the polyphony is limited to eight channels. It works, as the video below spells out.
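
The classic trick behind this kind of polyphony is one phase accumulator per voice: each voice's square wave is simply the top bit of a counter that advances at the note's frequency, and the hardware sums the per-voice output pins through resistors. A hedged C++ simulation of the idea (not [mitxela]'s actual code; the tick rate is an assumption) looks like this:

    // Hedged sketch: 8-voice square-wave polyphony via phase accumulators.
    #include <array>
    #include <cstdint>
    #include <cstdio>

    int main() {
        const double kTickRate = 31250.0;  // assumed update rate, samples/s
        std::array<uint16_t, 8> phase{};   // one accumulator per voice
        std::array<uint16_t, 8> incr{};    // per-voice phase increment

        auto set_freq = [&](int v, double hz) {
            incr[v] = static_cast<uint16_t>(hz * 65536.0 / kTickRate);
        };
        set_freq(0, 261.63);  // C4
        set_freq(1, 329.63);  // E4
        set_freq(2, 392.00);  // G4 (remaining voices stay silent)

        for (int n = 0; n < 64; ++n) {     // render a few ticks
            int mix = 0;
            for (int v = 0; v < 8; ++v) {
                phase[v] += incr[v];                 // wraps at 16 bits
                mix += (phase[v] & 0x8000) ? 1 : 0;  // top bit = square wave
            }
            std::printf("%d ", mix);  // 0..8, like 8 pins summed by resistors
        }
        std::printf("\n");
        return 0;
    }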

Is it a good synth? No, not really. By [mitxela]’s own assertion, it’s not a practical solution to anything, the dead-bug construction takes an hour to put together, and the synth itself is limited to square waves with some ugly quantization at that. It is a neat exercise in developing unique audio devices, and an especially hacky one, making it a very cool build. And it doesn’t sound half bad.


Filed under: ATtiny Hacks, digital audio hacks, musical hacks

by Brian Benchoff at March 31, 2016 05:00 AM


March 29, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.0 stable release (binaries)

Pre-built binary images of the 1.8.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

March 29, 2016 10:00 AM

March 27, 2016

Libre Music Production - Articles, Tutorials and News

Petigor's Tale used Audacity for sound recording

Petigor's Tale used Audacity for sound recording

When the authors of Petigor's Tale, a game developed using Blend4Web, wanted to record and edit sound effects for their upcoming game, their choice fell on Audacity.

Read their detailed blog entry about how the recording and editing were done.

by admin at March 27, 2016 08:38 PM

March 26, 2016

Libre Music Production - Articles, Tutorials and News

DrumGizmo version 0.9.9

DrumGizmo version 0.9.9

DrumGizmo version 0.9.9 is just out!

Highlighted changes / fixes:

  • Switch to LGPLv3
  • Linux VST
  • Embedded UI
  • Prepped for diskstreaming (but not yet implemented in UI)
  • Loads of bug fixes

Read the ChangeLog file for the full list of changes

by yassinphilip at March 26, 2016 06:20 PM

March 24, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.0 stable release

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

This release has been in the works for half a year and is packed with new features, bug fixes and other improvements.

See /releases/1.8/ for the full list of changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly after the source release by the GStreamer project during the stable 1.8 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

March 24, 2016 10:00 AM

Libre Music Production - Articles, Tutorials and News

AV Linux 2016: The Release

AV Linux 2016: The Release

With this release, Glen is moving away from the 'everything but the kitchen sink' approach and instead is focusing on providing a very stable base suitable for low latency audio production.

by yassinphilip at March 24, 2016 05:43 AM

March 23, 2016

Libre Music Production - Articles, Tutorials and News

Ardour 4.7 released

Ardour 4.7 released

Ardour 4.7 is now available, including a variety of improvements and minor bug fixes. The two most significant changes are:

by yassinphilip at March 23, 2016 11:02 AM

Linux Audio Users & Musicians Video Blog

Come Around – Evergreen

This is a music video of a song recorded/mixed/mastered using Linux (AV Linux 2016) with Harrison Mixbus 3.1 along with some Calf and linuxDSP plugins. This is also the first production from our new ‘Bandshed’ studio and will be released as part of a full EP in a month or so. The band ‘Evergreen’ is the band my son drums in and ‘Come Around’ is an original song written by the singer.



by DJ Kotau at March 23, 2016 07:04 AM

March 22, 2016

Libre Music Production - Articles, Tutorials and News

Qtractor 0.7.5 (hazy photon) is out!

Qtractor, the audio/MIDI multi-track sequencer, has reached the 0.7.5 milestone!!


Highlights for this dot/beta release:

by yassinphilip at March 22, 2016 03:03 AM

March 21, 2016

Libre Music Production - Articles, Tutorials and News

Building SuperCollider 3.7.0 from Source (Debian)

Building SuperCollider 3.7.0 from Source (Debian)

A few months ago we published an introduction to the audio programming language SuperCollider here on LMP. With the recent announcement that SuperCollider has reached 3.7.0, we Debian Linux users suddenly find ourselves behind the times regarding our SuperCollider packages, which are likely to be at 3.6.6 for some time. If you want 3.7.0 now (or any bleeding-edge version in the future) you have no choice but to build it from source.

by Scott Petersen at March 21, 2016 08:53 PM

rncbc.org

Qtractor 0.7.5 - The Hazy Photon is out!

Hello everybody!

Qtractor 0.7.5 (hazy photon) is out!

It comes with one top recommendation though: please update, at once, while it's hot! :)

Highlights for this dot/beta release:

  • Overlapping clips cross-fade (NEW)
  • MIDI Send/Return and Aux-Send insert plugins (NEW)
  • Generic and custom track icons eye-candy (NEW)

Some other interesting points may be found in the blunt and misty change-log below.

And just in case you missed it before,

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures it builds on, evolving as a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Change-log:

  • The beat unit divisor, aka. the denominator or lower numeral in the time-signature, now has a visible and practical effect over the time-line, even though the standard MIDI tempo (BPM) is always denoted in beats as quarter-notes (1/4, crotchet, semiminima) per minute; a small worked example follows this change-log.
  • Fixed an old hack on LV2 State Files abstract/relative file-path mapping when saving custom LV2 Presets (after a related issue on Fabla2, by Harry Van Haaren, thanks).
  • Default PC-Keyboard shortcuts may now be erased and re-assigned (cf. Help/Shortcuts...).
  • New option on the audio/MIDI export dialog, on whether to add/import the exported result as brand new track(s).
  • Introducing a brand new track icons property.
  • Old Dry/Wet Insert and Aux-send pseudo-plugin parameters are now split into separate Dry and Wet controls, what else could it possibly be? :)
  • Brand new MIDI Insert and Aux-Send pseudo-plugins are now implemented with very similar semantics as the respective and existing audio counterparts.
  • Implement LV2_STATE__loadDefaultState feature (after pull request by Hanspeter Portner aka. ventosus, thanks).
  • Plug-ins search paths internal logic has been refactored; an alternative file-name based search is now in effect for LADSPA, DSSI and VST plug-ins, whenever not found on their original file-path locations saved in a previous session.
  • Finally added the brand new menu Clip/Cross Fade command, aimed at setting fade-in/out ranges properly, so as to (auto)cross-fade consecutive overlapping clips.
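
Regarding the beat-unit divisor noted in the first change-log entry, the arithmetic is compact: since MIDI tempo counts quarter notes per minute, a beat displayed as 1/denominator of a whole note lasts (60/BPM) x (4/denominator) seconds. A small worked example:

    // Worked example: beat-unit durations at a fixed MIDI tempo.
    #include <cstdio>

    int main() {
        const double bpm = 120.0;        // quarter notes per minute
        const int denoms[3] = {2, 4, 8}; // lower numerals: x/2, x/4, x/8
        for (int denom : denoms) {
            double secs = (60.0 / bpm) * (4.0 / denom);
            std::printf("x/%d beat unit: %.3f s\n", denom, secs);
        }
        // At 120 BPM: half-note beats last 1.000 s, quarters 0.500 s,
        // eighths 0.250 s.
        return 0;
    }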

Enjoy && Keep the fun, always.


Website:

http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor

Wiki (on going, help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun, always.

by rncbc at March 21, 2016 08:00 PM

Libre Music Production - Articles, Tutorials and News

SuperCollider 3.7.0 Released

SuperCollider 3.7.0, over two years in the making, has finally been released!  Additions and fixes include (from the News in 3.7 help file):

by Scott Petersen at March 21, 2016 07:33 PM

March 20, 2016

OpenAV

New Web Host

New Web Host

Hey! Have you noticed OpenAV looks a little fresh again? Yep, we’ve moved to a sparkly new server. Why? Mostly due to a bug on the older server (backstory) which was extremely hard to fix. We’ve migrated to another, and things should be all rosy from now on. For any downtime, please report to our webmaster directly – harryhaaren@gmail.com. Now onwards… Read more →

by harry at March 20, 2016 09:56 PM

March 19, 2016

digital audio hacks – Hackaday

Tombstone Brings New Life to Board

Making revisions to existing PCBs with surface mount components often leads to creative solutions, and this insertion of a switch over a tombstoned resistor is no exception. According to [kubatyszko], “this is an FPGA-based Amiga clone. R15 serves as joint-stereo mixing signal between channels to make it easier on headphone users (Amiga has 4 channels, 2 left and 2 right). Removing R15 makes the stereo 100% ‘original’ with fully independent channels. Didn’t want to make it permanent so I decided to put a switch.”

Whether [kubatyszko] intends it or not, this solution is not going to be permanent without some additional work to mechanically secure the switch. We’ve tried this sort of thing before and it sometimes results in the contact area of the resistor being ripped off the substrate and separated from the rest of the resistor, rendering it useless. However, the creative use of the pads to get some additional functionality out of the board deserves some kudos.

We love creative fixes for board problems but it’s been a really long time since we’ve seen several of them collected in one place. We’d love to hear your favorite tricks so let us know in the comments below.


Filed under: digital audio hacks, misc hacks

by Bob Baddeley at March 19, 2016 11:01 AM

March 18, 2016

Libre Music Production - Articles, Tutorials and News

New Guitarix preset from Sebastian Posch

Sebastian Posch, Guitarix user extraordinaire, has released some new videos.

First out is his "Dope: Stoner/Doom Metal Preset for Guitarix":

And while you are at it, why not check out his guitar lesson about string skipping:

by admin at March 18, 2016 07:14 PM

March 16, 2016

Talk Unafraid

The Investigatory Powers Bill for architects and administrators

OK, it’s not the end of the world. But it does change things radically, should it pass third reading in its current form. There is, right now, an opportunity to effect some change to the bill at committee stage, and I urge you to read it, along with the excellent briefings from Liberty, the Open Rights Group and others, and to write to your MP.

Anyway. What does this change in our threat models and security assessments? What aspects of security validation and testing do we need to take more seriously? I’m writing this from a small-ISP systems perspective, and it contains my personal views, not those of my employer, yada yada.

The threats

First up let’s look at what the government can actually do under this bill. I’m going to try and abstract things a little from the text in the bill, but essentially they can:

  • Issue a technical capability notice, which can compel the organization to make technical changes in order to provide a capability or service to government
  • Compel an individual (not necessarily within your organization) to access data
  • Issue a retention notice, which can compel the organization to store data and make it available through some mechanism
  • Covertly undertake equipment interference (potentially with the covert, compelled assistance of someone in the organization, potentially in bulk)

Assuming we’re handling some users’ data, and want to protect their privacy and security as their bits transit the network we operate, what do we now need to consider?

  • We can’t trust any individual internally
  • We can’t trust any small group of individuals fully
  • We can’t trust the entire organization not to become compromised
  • We must assume that we are subject to attempts at equipment interference
  • We must assume that we may be required to retain more data than we need to

So we’re going to end up with a bigger threat surface and more attractors for external threats (all that lovely data). We’ve got to assume individuals may be legally compelled to act against the best interests of the company’s users – this is something any organization has to consider a little bit, but we’ve always viewed it from the perspective of angry employees the day they quit, and so on. We can’t even trust that small groups are not compromised and may either transfer sensitive data or assist in the compromise of equipment.

Beyond that, we have to consider what happens if an organizational notice is made – what if we’re compelled to retain non-sampled flow data, or perform deep packet inspection and retain things like HTTP headers? How should we defend against all of this, from the perspective of keeping our users safe?

Motivation

To be clear – I am all for targeted surveillance. I believe strongly we should have well funded, smart people in our intelligence services, and that they should operate in a legal fashion, with clear laws that are up to date and maintained. I accept that no society with functional security services will have perfect privacy.

I don’t think the IPB is the right solution, mind you, but this is all to say that there will always be some need for targeted surveillance and equipment interference. These should be conducted only when a warrant is issued (preferably by a judge and cabinet minister), and ISPs should indeed be legally able to assist in these cases, which requires some loss of security and privacy for those targeted users – and it should be only those users.

I am a paid-up member of the Open Rights Group, Liberty and the Electronic Frontier Foundation. I also regularly attend industry events in the tech sector, and the ISP sector in particular. Nobody wants to stop our spies from spying where there’s a clear need for them to do so.

However, as with all engineering, it’s a matter of tradeoffs. Bulk equipment interference or bulk data retention is complete overkill and helps nobody. Covert attacks on UK infrastructure actively weaken our security. So how do we go about building a framework that permits targeted data retention and equipment interference in a secure manner? Indeed, one that encourages it at an organizational level rather than forcing it to occur covertly?

Equipment Interference

This is the big one, really. It doesn’t matter how it happens – an internally compelled employee, a cleaner plugging a USB stick from a vague but menacing government agency into a server and rebooting it, switches having their bootloaders flashed with new firmware as they’re shipped to you, or covert network intrusion. Either way you end up in a situation where your routers, switches, servers etc. are doing things you did not expect, almost certainly without your knowledge.

This makes it practically impossible to ensure they are secure against any threats. Sure, your Catalyst claims to be running IOS 13.2.1. Your MX-series claims to be running JunOS 15.1. Can we verify this? Maybe. We can use host-based intrusion detection systems to monitor integrity and raise alarms.
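
The essence of such integrity monitoring is small enough to sketch: hash the artifacts you care about (firmware images, configs) and raise an alarm when the hash drifts from a known-good baseline. Real tools (AIDE, Samhain and friends) do far more, and the non-cryptographic FNV-1a hash below is purely illustrative; treat this as a hedged toy, not a HIDS.

    // Hedged sketch: compare a file against a known-good hash baseline.
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <string>

    uint64_t fnv1a(std::istream& in) {
        uint64_t h = 1469598103934665603ull;   // FNV-1a offset basis
        char c;
        while (in.get(c)) {
            h ^= static_cast<unsigned char>(c);
            h *= 1099511628211ull;             // FNV-1a prime
        }
        return h;
    }

    int main(int argc, char** argv) {
        if (argc != 3) {
            std::cerr << "usage: fimcheck <file> <expected-hash-hex>\n";
            return 2;
        }
        std::ifstream f(argv[1], std::ios::binary);
        if (!f) { std::cerr << "cannot open " << argv[1] << "\n"; return 2; }
        if (fnv1a(f) != std::stoull(argv[2], nullptr, 16)) {
            std::cerr << "ALARM: " << argv[1] << " changed\n";  // integrity drift
            return 1;
        }
        std::cout << "ok\n";
        return 0;
    }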

Now, proper auditing, logging and monitoring of all these devices, coupled with change management etc., will catch most of the mundane approaches – that’s just good infosec, and we have to do that to catch all the criminals, script kiddies and random bots trawling the net for vulnerable hosts. Where it gets interesting is how you protect against the sysadmin themselves.

It feels like we need to start implementing m-in-n authorization to perform tasks around sensitive hosts and services. Some stuff we should be able to lock down quite firmly. Reconfiguring firewalls outside of the managed, audited process for doing so using a configuration management (CM) tool? Clearly no need for this, so why should anyone ever be able to do it? All services in CM, be it Puppet/Salt/Chef, with strongly guarded git and puppet repositories and strong authentication everywhere (keys, a proper CA with signing for CM client/server auth, etc.)? Then why would admins ever need to log into machines? Except inevitably someone does need to, and they’ll need root to diagnose whatever’s gone wrong, even if the fix lands in CM eventually.
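
As a toy illustration of the m-in-n idea, the gate below runs a sensitive action only once m distinct administrators have approved it. This is a hedged sketch only: a real deployment would bind approvals to hardware tokens or signatures, not bare strings.

    // Hedged sketch: an action runs only with approvals from m distinct admins.
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <set>
    #include <string>

    class QuorumGate {
        std::set<std::string> approvals_;  // distinct approvers so far
        const std::size_t m_;              // required number of approvals
    public:
        explicit QuorumGate(std::size_t m) : m_(m) {}
        void approve(const std::string& admin) { approvals_.insert(admin); }
        bool run(const std::function<void()>& action) {
            if (approvals_.size() < m_) return false;  // quorum not met
            action();
            approvals_.clear();  // one quorum per execution
            return true;
        }
    };

    int main() {
        QuorumGate gate(2);  // any two admins must concur
        auto reconfigure = [] { std::cout << "firewall reconfigured\n"; };
        gate.approve("alice");
        std::cout << gate.run(reconfigure) << "\n";  // 0: only one approval
        gate.approve("alice");                       // duplicates don't count
        gate.approve("bob");
        std::cout << gate.run(reconfigure) << "\n";  // quorum met, prints 1
        return 0;
    }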

We can implement 2-person or even 3-person authentication quite easily, even at small scales, using physical tools – hardware security modules locked in 2-key safes, or similar. But it’s cumbersome and complicated, and doesn’t work for small scales where availability is a concern – is the on-call team now 3 people, and are they all in the office all the time with their keys?

There’s a lot that could be done to improve that situation in low to medium security environments: to stop the simple attacks, to improve the baseline for operational security, and crucially to surface any covert attempts at EI conducted by an individual or from outside. Organizationally, it’d be best for everyone if the organization were aware of modifications that were required to their equipment.

From a security perspective, a technical capability notice or data retention notice of some sort issued to the company or group of people at least means that a discussion can be had internally. The organization may well be able to assist in minimising collateral damage. Imagine: “GCHQ needs to look at detailed traffic for this list of 10 IPs in an investigation? Okay, stick those subscribers in a separate VLAN once they hit the edge switches, route that through the core here and perform the extra logging here for just that VLAN and they’ve got it! Nobody else gets logged!” rather than “hey, why is this Juniper box suddenly sending a few Mbps from its management interface to some IP in Gloucestershire? And does anyone know why the routing engines both restarted lately?”

Data Retention

This one’s actually pretty easy to think about. If it’s legally compelled by a retention or technical capability notice, you must retain as required, and store it as you would your own browser history – in a write-only secure enclave, with vetted staff, ISO 27001-compliant processes (plus whatever CESG requires), complete and utter segmentation from the rest of the business, and whatever “request filter” the government requires staying in there with dedicated, highly monitored and audited connectivity.

What’s that, you say? The government is not willing to pay for all that? The overhead of such a store for most small ISPs (<100,000 customers) would be huge. We’re talking millions if not more per small ISP (ispreview.co.uk lists 237 ISPs in the UK). Substantial office space, probably 5 non-technical and 5 technical staff at minimum, a completely separate network, data diodes from the collection systems, collection systems themselves, redundant storage hardware, development and test environments, backups (offsite, of course – to your second highly secure storage facility), processing hardware for the request filter, and so on. Just the collection hardware might be half a million pounds of equipment for a small ISP. If the government start requiring CESG IL3 or higher, the costs keep going up. The code of practice suggests bulk data might just be held at OFFICIAL – SENSITIVE, though, so IL2 might be enough.

The biggest risk to organizations when it comes to data retention is that the government might not cover your costs – they’re certainly not required to. And of course the fact that you’re the one to blame if you don’t secure it properly and it gets leaked. And the fact that every hacker with dreams of identity theft in the universe now wants to hack you so badly, because you’ve just become a wonderfully juicy repository of information. If this info got out, even for a small ISP – and we’re talking personally-identifiable flow information/IP logs, which is what “Internet Connection Records” look/sound like, though they’re still not defined – then Ashley Madison, TalkTalk and every other “big data breach” would look hilariously irrelevant by comparison. Imagine what personal data you could extract from those 10,000 users at that small ISP! Imagine how many people’s personal lives you could utterly destroy by outing them as gay, trans or HIV positive, or a thousand other things. All it would take is one tiny leak.

You can’t do anything to improve the security/privacy of your end users – at this point, you’re legally not allowed to stop collecting the data. Secure it properly and did I mention you should write to your MP while the IPB is at committee stage?

If you’ve not been served with a notice: carry on, business as usual, retain as little as possible to cover operational needs and secure it well.

Auditing

Auditing isn’t a thing that happens enough.

I always think that auditing is a huge missed opportunity. We do pair programming and code review in the software world, so why not do terminal session reviews? If X logs into a router, makes 10 changes and logs out, yes, we can audit the config changes and do other stateful analysis, but we can also audit those commands as a single session. It feels like there’s a tool missing that collates logs from something like syslog, brings them together as a session, and then exposes that as a thing people can look over, review, and approve or flag for discussion.
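
A sketch of that missing tool might be little more than a fold over log lines keyed by user and session. The whitespace-separated "<user> <session-id> <command>" input format below is entirely hypothetical, standing in for real syslog parsing:

    // Hedged sketch: fold per-command audit lines into reviewable sessions.
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        std::map<std::string, std::vector<std::string>> sessions;
        std::string line;
        while (std::getline(std::cin, line)) {
            std::istringstream in(line);
            std::string user, sid, cmd;
            if (!(in >> user >> sid) || !std::getline(in, cmd)) continue;
            sessions[user + "/" + sid].push_back(cmd);  // group by user+session
        }
        // Emit each session as one unit to look over, review, and approve.
        for (const auto& [key, cmds] : sessions) {
            std::cout << "=== session " << key << " ("
                      << cmds.size() << " commands) ===\n";
            for (const auto& c : cmds) std::cout << " " << c << "\n";
        }
        return 0;
    }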

It’s a nice way for people to learn, too – I’ve discovered so many useful tools from watching my colleagues hack away at a server, and about the only way I can make people feel comfortable working with SELinux is to walk them through the quite friendly tools.

Auditing in any case should become a matter of course. Tools like graylog2, ELK and a smorgasbord of others allow you to set up alerts or streams on log lines – start surfacing things like root logins, su/sudo usage, and “high risk” commands like firmware updates, logging configuration, and so on. Stick a display on your dashboards.

Auditing things that don’t produce nice auditable logs is of course more difficult – some firewalls don’t, some appliances don’t. Those just need to be replaced or wrapped in a layer that can be audited. Web interface with no login or command audit trail? Stick it behind an HTTPS proxy that does log, and pull out the POSTs. Firewall with no logging capability? Bin it and put in something that does. Come on, it’s 2016.

Technical capability notices and the rest

This is the unfixable part. If you get handed a TCN, you basically have to do what it says. You can appeal on the grounds of technical infeasibility, but not proportionality or anything like that. So short of radically decentralizing your infrastructure to make it technically too expensive for the government, you’re kind of stuck with doing what they say.

The law is written well enough to prevent obvious loopholes. If you’re an ISP, you might consider encryption – you could encrypt data at your CPEs and decrypt it at your edge. You could go a step further and not decrypt it at all, but pass it to some other company you notionally operate extraterritorially, who decrypt it and then send it on its way from there. But these come with potentially huge costs, and in any case the TCN can require you to remove any protection you applied or are in a position to remove, where practical.

We can harden infrastructure a little – things like using m-in-n models, DNSCrypt for DNS lookups from CPEs, securely authenticating provisioning servers and so on. But there is no technical solution to a policy problem – absolutely any ISP, CSP, or 1-man startup in the UK is as powerless as the next if the government rocks up with a TCN requiring you to store all your customers’ data, or to install black boxes everywhere your aggregation layer connects to the core, or whatever.

Effectively, then, the UK industry is powerless to prevent the government from doing whatever the hell it likes, regardless of security or privacy implications, to our networks, hardware and software. We can take some steps to mitigate covert threats or at least give us a better chance of finding them, and we can make some changes which attempt to mitigate against compelled (or hostile) actors internally – there’s an argument that says we should be doing this anyway.

And we can cooperate with properly-scoped targeted warrants. Law enforcement is full of good people, trying to do the right thing. But their views on what the right thing to do is must not dictate political direction and legal implementation while ignoring the technical realities. To do so is to doom the UK to many more years with a legal framework which does not reflect reality, and actively harms the security of millions of end users.

by James Harrison at March 16, 2016 09:49 PM

ardour

Reduced service, March 17th-29th

Starting on March 17th, anybody who requires assistance with subscriptions, website registration and so forth will need to wait until the 29th. I (Paul) will be travelling and likely without any internet access during that time. I will survey emails when I get back, but older forum posts will not get scanned, so if you have an issue related to those things, please send me mail (paul@linuxaudiosystems.com) rather than assume that a forum post will lead to action - it will not.

Friendly users and some developers will likely still answer other questions posted to the forums, so don't feel limited in that respect.

read more

by paul at March 16, 2016 03:32 PM

March 15, 2016


GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.0 release candidate 2 (1.7.91)

The GStreamer team is pleased to announce the second release candidate of the stable 1.8 release series. The 1.8 release series is adding new features on top of the 1.0, 1.2, 1.4 and 1.6 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the stable 1.8 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

March 15, 2016 01:00 PM

March 13, 2016

digital audio hacks – Hackaday

A Pi Powered Recording Studio

In the mid-90s, you recorded your band’s demo on a Tascam cassette tape deck. These surprisingly cheap four-track portable studios were just low-tech enough to lend an air of authenticity to a band that calls itself ‘something like Pearl Jam, but with a piano’. These tape decks disappeared a decade later, just like your dreams of being a rock star, replaced with portable digital recording studios.

The Raspberry Pi exists, the Linux audio stack is in much better shape than it was ten years ago, and now it’s possible to build your own standalone recording studio. That’s exactly what [Daniel] is doing for our Raspberry Pi Zero contest, and somewhat predictably he’s calling it the piStudio.

Although the technology has moved from cassette tapes to CompactFlash cards to hard drives, the design of these four-track mini recording studios hasn’t really changed since their introduction in the 1980s. There are four channels, each with a fader, balance, EQ, and a line in and XLR jack. There are master controls, a few VU meters, and if the technology is digital, a pair of MIDI jacks. Since [Daniel] is using a Raspberry Pi for this project, he threw in an LCD for a great user interface.

As with all digital recorders, the money is in the analog-to-digital converters. [Daniel] is using a 24-bit, 216 kHz, four-channel chip, Texas Instruments’ PCM4204. That’s more than enough to confuse the ears of an audiophile, although that much data will require a hard drive. Good thing there will be SATA.
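
The arithmetic behind "that much data will require a hard drive" is worth a moment: four channels of 24-bit samples at 216 kHz come to roughly 2.6 MB every second, before any container overhead.

    // Quick arithmetic: raw data rate of 4 channels x 24 bits x 216 kHz.
    #include <cstdio>

    int main() {
        const double channels = 4, bits = 24, rate = 216000;
        const double bytes_per_sec = channels * (bits / 8) * rate;
        // Prints roughly: 2.6 MB/s, 9.3 GB/hour of 4-track recording.
        std::printf("%.1f MB/s, %.1f GB/hour\n",
                    bytes_per_sec / 1e6, bytes_per_sec * 3600 / 1e9);
        return 0;
    }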

Although you can buy an eight-channel solid-state recorder for a few hundred dollars (and [Daniel] will assuredly put more than that into this project), it’s a great application of a ubiquitous Linux computer for a device that’s very, very useful.



The Raspberry Pi Zero contest is presented by Hackaday and Adafruit. Prizes include Raspberry Pi Zeros from Adafruit and gift cards to The Hackaday Store!
See All the Entries || Enter Your Project Now!


Filed under: digital audio hacks, Raspberry Pi

by Brian Benchoff at March 13, 2016 08:00 PM

March 11, 2016

ardour

Subscription/Payment Problems (Part 2)

There have been continuing issues with our interactions with PayPal over the last several days. PayPal required some small changes to the way things work (good changes that help with security), but they also changed some minor details that broke our payment processing system in subtle ways.

If you made a payment or tried to set up a subscription in the period March 8th - March 11th at about 15:30h UTC, and things did not work as you expected, please email me at paul@linuxaudiosystems.com and we'll make it right.

The problems are believed to be fixed now. Apologies for the errors and inconvenience.

read more

by paul at March 11, 2016 08:39 PM

Libre Music Production - Articles, Tutorials and News

MiniLAC Berlin 2016

MiniLAC is a more compact, community-driven version of the yearly Linux Audio Conference.

A place to talk to the developers of your favorite Linux (capable) audio software – music producers and developers coming together, thinking about the future shape of Linux Audio.


by yassinphilip at March 11, 2016 05:15 AM

DFasma 1.4.4 released

DFasma is free, open-source software used to compare audio files in time and frequency. The comparison is first visual, using waveforms and spectra. It is also possible to listen to time-frequency segments in order to allow perceptual comparison. It is basically dedicated to analysis. Even though there are basic functionalities to align the signals in time and amplitude, this software does not aim to be an audio editor.

by yassinphilip at March 11, 2016 04:27 AM

March 09, 2016

open-source – cdm createdigitalmusic

Ableton just released every last detail of how Push 2 works

Wish granted, hackers. The full specification for Ableton’s Push 2 hardware is now online on GitHub, after passionate Live users clamored for its release. And there’s a lot. This isn’t just a MIDI specification (though that’s there). Every minute detail of how colors appear on LEDs gets covered. (The color “white” has its own section. Yeah, like that minute.) Every animation. The pixels that show up on the display. This isn’t just a guide to how to hack Push 2 – though it’s certainly that. It’s a technical bible on how Push 2 works.

Here, the easiest way to express this is actually to post the table of contents (a small, hedged sketch of the MIDI side follows it):

1. Introduction
1.1. Purpose
1.2. Architecture Overview
2. MIDI Interface
2.1. MIDI Interface Access
2.2. MIDI Messages
2.3. MIDI Mapping
2.4. Sysex Commands
2.4.1. General Command Format
2.4.2. Command List
2.5. MIDI Mode
2.6. LEDs
2.6.1. Setting LED Colors
2.6.2. RGB LED Color Processing
2.6.3. White LED Color Processing
2.6.4. Touch Strip LED Color Processing
2.6.5. Default Color Palettes
2.6.6. White Balance
2.6.7. Global LED Brightness
2.6.8. LED Animation
2.6.9. PWM Frequency
2.7. Buttons
2.8. Pads
2.8.1. Velocity Curve
2.8.2. Pad Parameters
2.8.3. Individual Pad Calibration
2.8.4. Aftertouch
2.9. Encoders
2.10. Touch Strip
2.10.1. Touch Strip Configuration
2.11. Pedals
2.11.1. Pedal Sampling
2.11.2. Pedal Configuration
2.12. Display Backlight
2.13. Device Inquiry
2.14. Statistics
3. Display Interface
3.1. USB Display Interface Access
3.2. Display Interface Protocol
3.2.1. Frame Header
3.2.2. Pixel Data
3.2.3. Pixel Color Encoding
3.2.4. XORing Pixel Data
3.2.5. Frame Buffering
3.2.6. Allocating Libusb Transfers
4. Appendix A: MIDI Implementation Chart
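
As a taste of what the MIDI sections above describe, pad LEDs on Push-style controllers are typically addressed by ordinary note-on messages, with the velocity byte indexing a color palette. The specific note number and palette entry below are assumptions for illustration only; the authoritative values live in the spec itself.

    // Hedged sketch: the 3 raw MIDI bytes that might light one pad.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint8_t pad_note  = 36;   // assumed: a pad in the 8x8 grid
        const uint8_t color_idx = 122;  // assumed: an entry in the palette
        const uint8_t msg[3] = { 0x90, pad_note, color_idx };  // note-on, ch. 1
        // A real program would write these bytes to the Push's MIDI port
        // (e.g. via ALSA rawmidi); here we just print them.
        std::printf("%02X %02X %02X\n", msg[0], msg[1], msg[2]);
        return 0;
    }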

I imagine this could inspire a whole lot of different people.

1. For the curious, you can learn how Push 2 works, just by browsing. (I had a manual to the Space Shuttle as a kid; this is sort of like that for hardware controller fans.)

2. If you’re working on a Max for Live patch, you can now easily learn how to make even minor modifications and hacks.

3. Developers working on Push 2 controller support now can do all kinds of new things.

4. People wanting to use Push 2 with software other than Live can now go about that in more powerful ways (and that’s possible, too, because all of this works regardless of OS and host).

5. People designing their own DIY hardware can learn from what Ableton have done. Yes, heck, that includes competitors – but to be honest, those competitors probably could figure this out on their own. And, oh, by the way, competitors will also be under equal pressure to reciprocate, which is good for all of us.

It’s not really “open source” – Ableton owns everything you see here. It wouldn’t really make sense to modify it, anyway, as it’s tied specifically to the Push 2 hardware. It’s more like public source – but that’s still a good thing.
https://github.com/Ableton/push-interface

I think it’s all intensely healthy. I’d still like to see an API on the side of the Live software that makes it easier to make modifications. But this is indisputably good news.

And since I really have no idea what people will do with it, let us know what you intend to do with it – or share the results.

The post Ableton just released every last detail of how Push 2 works appeared first on cdm createdigitalmusic.

by Peter Kirn at March 09, 2016 06:27 PM

Google are giving away experiments to teach music and code

Technology’s record in the last century was often replacing music making with music consumption. But in this century, that might turn around. Google seems to hope so. Today, the company posted a set of free sound toys on its site, all running in your browser. They’re fun diversions for now – but thanks to open code and powerful new browser features, they could become more.


You’ve possibly come across the first experiment, available as a Google Doodle on the search engine’s homepage. In that, Clara Rockmore teaches you a simple melody on a simulated Theremin.

But there are more – and education is the apparent goal. Google says they’re assembling the so-called “Chrome Music Lab” in honor of “Music In Our Schools month.” The idea is to let you explore how music works.


Perhaps more interesting than that, though, is how these experiments are delivered. By running in the browser, it’s possible to make lessons available instantly, anywhere – and to let you interact with them at your own pace.

“Chrome” is of course featured, but I found myself running experiments in Safari, too, without incident.


The funny thing is, we’re now catching up to an idea that was already in proof of concept form some twenty years earlier. For example, Morton Subotnick may be known to most these days as a pioneer of composition and modular synthesis – and he is that – but in 1995, he did something very like these experiments. “Morton Subotnick’s Making Music” for Voyager included a series of interactive toys intended to allow kids to play with advanced music concepts quickly. Back then, the delivery mechanism was multimedia CD-ROM, requiring Mac and PC machines with particular specs to realize the content.

Apps, of course, did this (particularly on iOS) – and sure enough, there’s a Subotnick painting app for iPad from a few years ago. But the ability of the browser to catch up means the chance for the screen used for everything else to become musical.

Now, while kids might use the Chrome Music Lab to learn about music, coders might use it to learn about code. Each example is free and open source, so you can learn from it and modify it.

And the “lab” includes software built on key, open technologies. Sound is delivered via the Web Audio API. WebGL and PIXI.JS make powerful graphics and animation easier. There are also tools (Tone.js and a microphone API) that make adding sound functionality less of a chore.

Just click the question mark icon on any experiment, and you’ll find information on the developer and a link to code on GitHub.

There’s even a chance to teach code and music at once – for kids, too:
https://musiclab.chromeexperiments.com/Oscillators

I find it actually a bit curious that Google says these are collaborations between coders and musicians – of course, these days, there’s often no distinction. But I can imagine collaborations between coder-musicians, music educators, and more expanding around this. The browser could be a place where we mess with music and learn literacy in music, expression, code, and math.

And if you aren’t optimistic about that, well… uh… share this and just listen to all the coworkers / family members / coffee shop goers who are making strange Theremin sounds, dragged away from things like American Presidential politics. I rest my case.

https://musiclab.chromeexperiments.com

And a blog post on the idea:
Introducing Chrome Music Lab

The post Google are giving away experiments to teach music and code appeared first on cdm createdigitalmusic.

by Peter Kirn at March 09, 2016 04:05 PM

March 05, 2016

GStreamer News

Orc 0.4.25 bug-fix release

The GStreamer team announces another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • compiler: also prefer the backup function when no target, instead of trying to use emulation which is usually slower
  • executor: fix load of parameters smaller than 64 bits, fixing crashes on ldresnearb and friends in emulated code
  • Only check for Android's liblog on Android targets, so we don't accidentally pick up another liblog that may exist elsewhere
  • Make -Bsymbolic check in configure work with clang

Direct tarball download: orc-0.4.25.

March 05, 2016 12:30 AM

March 02, 2016

rncbc.org

Vee One Suite 0.7.4 - The Ninth-bis beta release


Hello again,

The so-called Vee One Suite, aka. the gang of three old-school software instruments – synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, a drum-kit sampler – are released yet again in their ninth official beta, second iteration (bis).

Still available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
  • an LV2 instrument plug-in.

As simple as a change-log may go:

  • Fixed the DCF Formant filter voice initialization reset.
  • French translation updated (by Olivier Humbert, thanks).

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And so they go again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.4 (ninth-bis official beta) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1


samplv1 - an old-school polyphonic sampler

samplv1 0.7.4 (ninth-bis official beta) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1


drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.4 (ninth-bis official beta) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1


Enjoy && keep the fun ;)

by rncbc at March 02, 2016 07:00 PM

March 01, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.0 release candidate 1 (1.7.90)

The GStreamer team is pleased to announce the first release candidate of the stable 1.8 release series. The 1.8 release series is adding new features on top of the 1.0, 1.2, 1.4 and 1.6 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the stable 1.8 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

March 01, 2016 06:00 PM

February 29, 2016


Audio, Linux and the combination

MOD DUO has arrived !

Hi all !

I know that it has been a loooong time since I posted anything, but I do have a life, you know ;-)

Anyway, I just wanted to share that my MOD DUO has arrived, and my son and I made a little MOD DUO unboxing video about it!



Great device, really nice build, and so far the interface has just blown me away!
I plan on doing some more vids on the MOD, but no promises!

Enjoy !


by noreply@blogger.com (Thijs Van Severen) at February 29, 2016 08:23 AM

Linux Audio Users & Musicians Video Blog

No Sister – Yassin Philip

New Track and video by Philip Yassin – Pop Tech

Video made with Blender VSE, Audio produced with Ubuntu, Qtractor, Calf Plugins and JAMin for mastering.



by DJ Kotau at February 29, 2016 08:21 AM

February 24, 2016

OpenAV

FLOSS Weekly : Stream Available

FLOSS Weekly : Stream Available

The image says it all – the FLOSS Weekly interview is available – check it out to hear the latest about OpenAV and MOD Devices if you missed the live-stream! Thanks to Randal and Guillermo, and all the crew at the FLOSS Weekly podcast. Professional gentlemen, a pleasure to have been on the show. Thanks! Read more →

by harry at February 24, 2016 09:47 PM

February 22, 2016

ardour

Nightly/Development News: "tabbed" has landed

If you use the nightly builds at http://nightly.ardour.org/ or if you build your own version of Ardour from git (for yourself or others), please be aware that at about 20:30 GMT, the master branch was merged with the "tabbed" branch and thus the resulting builds will be substantively different from any older versions.

The "tabbed" branch features two important changes from previous versions of Ardour. First and foremost, both the editor and mixer windows (along with the preferences window) are by default displayed as tabs in a single window. The tabs can be torn off to create detached versions, and the program will remember this state. Secondly, the entire mechanism for keyboard shortcuts has been completely redesigned to allow us to break away more easily from the constraints that GTK+ (our GUI toolkit) was imposing on us.

The "tabbed" branch was under development for months, and has received some testing by a handful of kind and brave users. We nevertheless expect some breakage to emerge as more people start trying it out.

If you use nightly builds or build Ardour yourself from git, please take a moment to consider the implications of your next "update". That said, please test it out and let us know what you think. There are lots of details left to be worked on before we consider this ready for release, and it will be a better release the more feedback we get.

read more

by paul at February 22, 2016 08:43 PM

rncbc.org

Vee One Suite 0.7.3 - The Ninth beta release

The Vee One Suite, aka. the gang of three old-school software instruments – synthv1, a polyphonic synthesizer; samplv1, a polyphonic sampler; and drumkv1, a drum-kit sampler – are now released in their ninth official beta iteration.

All available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
  • an LV2 instrument plug-in.

Common to all three comes this spartan change-log:

  • Avoid out-of-bound MIDI events as much as possible, coping with LV2 plug-in hosts that feed/run them in borderline circumstances (as reported by Thorsten Wilms on a suspected Ardour looping crash/bug, probably fixed already, thanks).
  • Tentatively safe defaults are being introduced to internal OUT FX buffer-sizes, as read from JACK buffer-size changes and LV2 block-length instantiation bound options.
  • Added application keywords to freedesktop.org's AppData.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And here they are again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.7.3 (ninth official beta) is now released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1


samplv1 - an old-school polyphonic sampler

samplv1 0.7.3 (ninth official beta) is released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1

drumkv1 - an old-school drum-kit sampler

drumkv1 0.7.3 (ninth official beta) is released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1

Enjoy && have (lots of) fun ;)

by rncbc at February 22, 2016 08:00 PM

OpenAV

FLOSS Weekly interview OpenAV and MOD!

Yep, you read that right: OpenAV and MOD are doing a live interview tomorrow for FLOSS Weekly – watch it here https://twit.tv/live! Going live at 5:30pm Ireland/UK time; use this link to check for your timezone! Have a question? Join the IRC room #twitlive here! Looking forward to it!

by harry at February 22, 2016 07:40 PM

February 21, 2016

fundamental code

Linux Musician Forum Community

After looking at the steady decline within the mailing lists, several people thought it was symptomatic of individuals moving on to other means of discussion. Looking at other mailing lists within the Linux Audio (LA) community, this didn’t really seem to be the case. If not mailing lists, then there’s always the possibility that people have moved on to forums or social media.

One of the biggest forums out there in LA is Linux Musicians (LM). From my own experience it’s a reasonably active site with plenty of LA users and a few LA devs. Based upon the names involved in the LA mailing lists and the LM forums I don’t see a migration, but mostly a new community. Rather than speculate on these impressions, let’s look at some data provided by a helpful LM administrator.

First there’s the total monthly posts:

[Plot: total monthly posts on the LM forum, 2008–2016]

From 2013 onward this activity exceeds the activity seen on the linux-audio-user (LAU) mailing list. There’s still the possibility that this is simply a small collection of very active users though, so let’s look at the number of active monthly posters:

[Plot: active monthly posters on the LM forum, 2008–2016]

Here the steady growth can be seen up through mid-2013. The overall number of active monthly users is lower than what was previously seen in LAU, however the user-base seems remarkably stable. There’s a very slight decline in the 2014-2016 period, but that could very well be noise. So far there doesn’t seem to be a sharp upturn in new users around the start of the LA mailing list decline of 2010.

This growth can be broken into a few segments if the individual sub-forums are inspected. The significant sub-forums with over 2000 total posts were: "Computer Related Hardware", "Developer’s Section", "KXStudio Discussion", "Linux Distributions & Other Software", "New Linux Music News", "New? We’re glad you’re here", "Plugins, Effects and Instruments", "Recorders & Sequencers", and "System Tuning and Configuration". First let’s see their overall posting activity:

[Plot: monthly posts per LM sub-forum, 2008–2016]

This figure explains a few features of the overall posting:

  • kxstudio is responsible for the spike in activity around 2013

  • the developer section is a very small fraction of the community

  • growing interest in plugins has helped offset decreased postings in recent years

This growth can be similarly seen in the active monthly users plot:

[Plot: active monthly posters per LM sub-forum, 2008–2016]

Without really digging through the forums I’d say mid-2012 looks like a combination of software releases with kxstudio driving traffic to the site. This is when the bulk of users seem to start flowing in. The "new users" slow spike around 2013 really shows the influx of users. Once again in this view there’s a small population of developers.

In summary, LM seems to be a very healthy forum whose success can be primarily traced back to 2012-2014. The LM user-base appears to be in much better shape in terms of consistency and scale of activity when compared to the LAU mailing list. There is limited adoption of LM for developer-to-developer conversation, but there is active feedback between developers and users, as shown by the consistent "new linux music news" sub-forum activity.

February 21, 2016 05:00 AM

February 12, 2016

digital audio hacks – Hackaday

Embed with Elliot: Audio Playback with Direct Digital Synthesis

Direct-digital synthesis (DDS) is a sample-playback technique that is useful for adding a little bit of audio to your projects without additional hardware. Want your robot to say ouch when it bumps into a wall? Or to play a flute solo? Of course, you could just buy a cheap WAV playback shield or module and write all of the samples to an SD card. Then you wouldn’t have to know anything about how microcontrollers can produce pitched audio, and could just skip the rest of this column and get on with your life.

~45 dB signal-to-noise ratio from an Arduino

But that’s not the way we roll. We’re going to embed the audio data in the code, and play it back with absolutely minimal additional hardware. And we’ll also gain control of the process. If you want to play your samples faster or slower, or add a tremolo effect, you’re going to want to take things into your own hands. We’re going to show you how to take a single sample of data and play it back at any pitch you’d like. DDS, oversimplified, is a way to make these modifications in pitch possible even though you’re using a fixed-frequency clock.

The same techniques used here can turn your microcontroller into a cheap and cheerful function generator that’s good for under a hundred kilohertz using PWM, and much faster with a better analog output. Hackaday’s own [Bil Herd] has a nice video post about the hardware side of digital signal generation that makes a great companion to this one if you’d like to go that route. But we’ll be focusing here on audio, because it’s easier, hands-on, and fun.

We’ll start out with a sample of the audio that you’d like to play back — that is, some data that corresponds to the voltage level measured by a microphone or something similar at regular points in time. To play the sample, all we’ll need to do is have the microcontroller output these voltages back at exactly the same speed. Let’s say that your “analog” output is via PWM, but it could easily be any other digital-to-analog converter (DAC) of your choosing. Each sample period, your code looks up a value and writes it out to the DAC. Done!

(In fact, other than reading the data from an SD card’s filesystem, and maybe having some on-board amplification, that’s about all those little WAV-player units are doing.)
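
In code, that playback loop is tiny. Here is a hypothetical sketch in the style of the demos below, with Timer1 firing once per sample period and Timer2’s OCR2A register acting as the PWM “DAC”; the 256-byte sample table is an assumption, not something from the article:

#include <avr/interrupt.h>
#include <avr/pgmspace.h>

extern const uint8_t sample[256] PROGMEM;    /* assumed unsigned wavetable */

ISR(TIMER1_COMPA_vect) {
    static uint8_t i = 0;
    OCR2A = pgm_read_byte_near(sample + i);  /* output one stored voltage */
    i++;                                     /* uint8_t wraps 255 -> 0 */
}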

Pitch Control

In the simplest example, the sample will play back at exactly the pitch it was recorded at if the sample playback rate equals the input sampling rate. You can make the pitch sound higher by playing back faster, and vice-versa. The obvious way to do this is to change the sample-playback clock: every period you play back the next sample, but you change the time between samples to give you the desired pitch. This works great for one voice, provided you have infinitely variable playback rates available.

Woof!

But let’s say that you want to take that sample of your dog barking and play Beethoven’s Fifth with it. You’re going to need multiple voices playing the sample back at different speeds to make the different pitches. Playing multiple pitches in this simplistic way would require multiple sample-playback clocks.

Here’s where DDS comes in. The idea is that, given a sampled waveform, you can play nearly any frequency from a fixed clock by skipping or repeating points of the sample as necessary. Doing this efficiently, and with minimal added distortion, is the trick to DDS. DDS has its limits, but they’re mostly due to the processor you’re using. You can buy radio-frequency DDS chips these days that output very clean sampled sine waves up to hundreds of megahertz with amazing frequency stability, so you know the method is sound.

Example

Let’s make things concrete with a simplistic example. Say we have a sample of a single cycle of a waveform that’s 256 bytes long, and each 8-bit byte corresponds to a single measured voltage at a point in time. If we play this sample back at ten microseconds per sample we’ll get a pitch of 1 / (10e-06 * 256) = 390.625 Hz, around the “G” in the middle of a piano.

Imagine that our playback clock can’t go any faster, but we’d nonetheless like to play the “A” that’s just a little bit higher in pitch, at 440 Hz. We’d be able to play the “A” if we had only sampled 227 bytes of data in the first place: 1 / (10e-06 * 227) = 440.53, but it’s a little bit late to be thinking of that now. On the other hand, if we just ignored 29 of the samples, we’d be there. The same logic works for playing lower notes, but in reverse. If some samples were played twice, or even more times, you could slow down the repetition rate of the cycle arbitrarily.
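
If you want to check those numbers yourself, the arithmetic fits in a few lines of plain C (nothing Arduino-specific assumed here):

#include <stdio.h>

int main(void) {
    double t = 10e-6;                                   /* 10 us per sample */
    printf("256 samples: %.3f Hz\n", 1.0 / (t * 256));  /* 390.625, the "G" */
    printf("227 samples: %.3f Hz\n", 1.0 / (t * 227));  /* 440.529, the "A" */
    return 0;
}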

In the skipping-samples case, you could just chop off the last 29 samples, but that would pretty seriously distort your waveform. You could imagine spreading the 29 samples throughout the 256 and deleting them that way, and that would work better. DDS takes this one step further by removing different, evenly spaced samples with each cycle through the sampled waveform. And it does it all through some simple math.

The crux is the accumulator. We’ll embed the 256 samples in a larger space — that is, we’ll create a new counter with many more steps so that each step in our sample corresponds to many numbers in our larger counter, the accumulator. In my example code below, each of the 256 steps gets 256 counts. So to advance one sample per period, we need to add 256 to the larger counter. To go faster, you add more than 256 each period, and to go slower, add less. That’s all there is to it, except for implementation details.

In the graph here, because I can’t draw 1,024 tick marks, we have 72 steps in the accumulator (the green outer ring) and twelve samples (inner, blue). Each sample corresponds to six steps in the accumulator. We’re advancing the accumulator four steps per period (the red lines) and you can see how the first sample gets played twice, then the next sample played only once, etc. In the end, the sample is played slower than if you took one sample per time period. If you take more than six steps in the increment, some samples will get skipped, and the waveform will play faster.
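
To make the wheel concrete, here is a minimal stand-alone simulation of that exact example (12 samples, 72 accumulator steps, so 6 steps per sample, advancing 4 per period); this is an illustration written for the figure, not code from the article’s repository:

#include <stdio.h>

int main(void) {
    int accumulator = 0, position = 0;
    for (int period = 0; period < 18; period++) {  /* 72 / 4 = one full turn */
        printf("period %2d: play sample %d\n", period, position);
        accumulator += 4;              /* the increment (red lines) */
        position += accumulator / 6;   /* 6 accumulator steps per sample */
        accumulator %= 6;
        position %= 12;                /* wrap around the inner ring */
    }
    return 0;
}

The first few lines of output show sample 0 played twice, then sample 1 played once, just as in the figure.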

Implementation and Build

So let’s code this up and flash it into an Arduino for testing. The code is up at GitHub for you to follow along. We’ll go through three demos: a basic implementation that works, a refined version that works a little better, and finally a goofy version that plays back single samples of dogs barking.

Filter “circuit”

In overview, we’ll be producing the analog output waveforms using filtered PWM, using the hardware-level PWM control in the AVR chip to do it. Briefly, there’s a timer that counts from 0 to 255 repeatedly, turning on a pin at the start and turning it off at a specified top value along the way. This lets us create a fast PWM signal with minimal CPU overhead, at the cost of one timer.

Still some jaggies left. Could use better filter.

We’ll use another timer that fires off periodically and runs some code, called an interrupt service routine (ISR), that loads the current sample into the PWM register. All of our DDS code will live in this ISR, so that’s all we’ll focus on.

If this is your first time working directly with the timer/counters on a microcontroller, you’ll find some configuration code that you don’t really have to worry about. All you need to know is that it sets up two timers: one running as fast as possible and controlling a PWM pin for audio output, and another running so that a particular chunk of code is called consistently, 24,000 times per second in this example.
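
For reference, here is roughly what that two-timer setup amounts to on an ATmega328 (Arduino Uno). This is a sketch under stated assumptions (register values and pin choice are illustrative), not the project’s actual configuration code:

#include <avr/io.h>
#include <avr/interrupt.h>

void setup_timers(void) {
    /* Timer2: fast PWM at full clock speed, non-inverting on OC2A (pin 11);
       OCR2A sets the duty cycle, i.e. the "analog" output level */
    TCCR2A = _BV(COM2A1) | _BV(WGM21) | _BV(WGM20);
    TCCR2B = _BV(CS20);               /* no prescaler: 62.5 kHz PWM at 16 MHz */
    OCR2A  = 128;                     /* mid-scale = silence */
    DDRB  |= _BV(PB3);                /* make the OC2A pin an output */

    /* Timer1: CTC mode, calling TIMER1_COMPA_vect 24,000 times per second */
    TCCR1A = 0;
    TCCR1B = _BV(WGM12) | _BV(CS10);  /* CTC, no prescaler */
    OCR1A  = F_CPU / 24000UL - 1;     /* 665 at 16 MHz */
    TIMSK1 = _BV(OCIE1A);             /* enable the compare-match interrupt */

    sei();                            /* global interrupt enable */
}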

So without further ado, here’s the ISR:

#include <avr/interrupt.h>
#include <avr/pgmspace.h>

/* Values here are illustrative; the full sketch on GitHub defines its own. */
#define NUM_VOICES        4     /* simultaneous voices to mix */
#define SAMPLE_LENGTH     256   /* samples per waveform table */
#define ACCUMULATOR_STEPS 256   /* accumulator counts per sample step */

struct DDS {
    uint16_t increment;
    uint16_t position;
    uint16_t accumulator;
    const int8_t* sample;   /* pointer to beginning of sample in memory */
};
volatile struct DDS voices[NUM_VOICES];

ISR(TIMER1_COMPA_vect) {
    int16_t total = 0;

    for (uint8_t i = 0; i < NUM_VOICES; i++) {
        /* read this voice's current sample byte from flash and mix it in */
        total += (int8_t) pgm_read_byte_near(voices[i].sample + voices[i].position);

        /* Take an increment step */
        voices[i].accumulator += voices[i].increment;
        voices[i].position += voices[i].accumulator / ACCUMULATOR_STEPS;
        voices[i].accumulator = voices[i].accumulator % ACCUMULATOR_STEPS;
        voices[i].position = voices[i].position % SAMPLE_LENGTH;
    }

    total = total / NUM_VOICES;
    OCR2A = total + 128; // add in offset to make it 0-255 rather than -128 to 127
}

The first thing the code does is to define a (global) variable that will hold the state of each voice for as many voices as we want, defined by NUM_VOICES. Each voice has an increment which determines how many steps to take in the accumulator per sample output. The position keeps track of exactly which of the 256 samples in our waveform data is currently playing, and the accumulator keeps track of the rest. Here, we’re also allowing for each voice to play back a different waveform table from memory, so the code needs to keep track of the address where each sample begins. Changing which sample gets played back is as simple as pointing this variable to a different memory location, as we’ll see later. For concreteness, you can imagine this sample memory to contain the points in a sine wave, but in practice any repetitive waveform will do.

So let’s dive into the ISR, and the meat of the routine. Each update cycle, the sum of the output on the different voices is calculated in total. For each voice, the current sample is read from memory, added to the total and then incremented to the next step. Here we get to see how the accumulator works. The increment variable is added to the accumulator. When the accumulator is larger than the number of steps per sample, the position variable gets moved along. Next, the accumulator is shrunk back down to just the remainder of the un-accounted-for values using the modulo operator, and the sample position is wrapped around if necessary with another modulo.
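
One thing the excerpt leaves to the calling code is choosing an increment for a desired pitch. With the values assumed here (a 256-sample table, 256 accumulator steps per sample, and a 24 kHz update rate), one full waveform cycle is 65,536 accumulator counts, so a hypothetical helper (mine, not the article’s) could be:

uint16_t freq_to_increment(float freq_hz) {
    /* f = increment * 24000 / 65536, solved for increment and rounded */
    return (uint16_t)(freq_hz * 65536.0f / 24000.0f + 0.5f);
}

For example, freq_to_increment(440.0f) gives 1201, which plays back at 1201 * 24000 / 65536 = 439.8 Hz, close enough to concert “A”.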

Division?? Modulo??

If you’ve worked with microcontrollers before, alarm bells may be going off in your head right now. The AVR has no hardware divide instruction, so a division can eat a lot of CPU power. And the modulo operator is even worse. That is, unless the divisor or modulo is a power of two. In those cases, the division is the same as shifting the binary number to the right by the exponent of the power of two.

A similar operation makes the modulo tolerable. If, for instance, you want a number to be modulo eight, you can simply drop all of the binary bits that correspond to values eight and higher. So, x % 8 can be implemented as x & 0b00000111 where this logical-ANDing just keeps the least-significant three bits. If you’re not in tune with your inner bit-flipper, this can be viewed as a detail — but just know that division and modulo aren’t necessarily bad news if your compiler knows how to implement them efficiently when you choose powers of two for the divisors.
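
Applied to the ISR above, with ACCUMULATOR_STEPS and SAMPLE_LENGTH both 256, the compiler is free to reduce the whole bookkeeping step to shifts and ANDs, roughly equivalent to this sketch:

static inline void dds_step(volatile struct DDS *v) {
    v->accumulator += v->increment;
    v->position    += v->accumulator >> 8;  /* accumulator / 256 */
    v->accumulator &= 0x00FF;               /* accumulator % 256 */
    v->position    &= 0x00FF;               /* position % SAMPLE_LENGTH */
}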

And that gets us to the end of the routine. The sample values were added together, so now they need dividing by the number of voices and centering around the mid-point to fit inside the 8-bit range that the PWM output register requires. As soon as this value is loaded into memory, the PWM hardware will take care of outputting the right waveform on its next cycle.

Refinements

The ISR above is already fairly streamlined. It’s avoided the use of any if statements that would otherwise slow it down. But it turns out we can do better, and this optimized form is often the way you’ll see DDS presented. Remember, we’re running this ISR (in this example) 24,000 times per second — any speedup inside the ISR makes a big difference in overall CPU usage.

The first thing we’ll do is make sure that we have only 256 samples. That way, we can get rid of the line where we limit the sample index to being within the correct range, simply by using an 8-bit variable for the sample index. As long as the number of bits in the index matches the length of the sample, it will roll over automatically.

We can use the same logic to merge the position and accumulator variables above into a single variable. If we have an 8-bit sample index and an 8-bit accumulator, we combine them into a 16-bit accumulator where the top eight bits correspond to the sample location.

struct DDS {
    uint16_t increment;
    uint16_t accumulator;
    const int8_t* sample;   /* pointer to beginning of sample in memory */
};
volatile struct DDS voices[NUM_VOICES];

ISR(TIMER1_COMPA_vect) {
    int16_t total = 0;

    for (uint8_t i = 0; i < NUM_VOICES; i++) {
        /* the top eight bits of the accumulator are the sample position */
        total += (int8_t) pgm_read_byte_near(voices[i].sample + (voices[i].accumulator >> 8));
        voices[i].accumulator += voices[i].increment;
    }
    total = total / NUM_VOICES;
    OCR2A = total + 128; // add in offset to make it 0-255 rather than -128 to 127
}

You can see that we’ve dropped the position value from the DDS structure entirely, and that the ISR is significantly streamlined in terms of lines of code. (It actually runs about 10% faster too.) Where previously we played the sample at sample + position, we are now playing the sample at sample + (accumulator >> 8). This means that the effective position value will only advance once every 256 steps of the increment — the high eight bits only change once all of the low 256 steps have been stepped through.

None of this is strange if you think about it in base 10, by the way. You’re used to counting up to 99 before the third digit flips over to 100. Here, we’re just using the most-significant bits to represent the sample step, and the number of least-significant bits determines how many increments we need to make before a step is taken. This method is essentially treating the 16-bit accumulator as a fixed-point 8.8 position value, if that helps clear things up. (If not, I’m definitely going to write something on fixed-point math in the future.) But that’s the gist of it.
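
A hypothetical illustration of that 8.8 reading, in C:

#include <stdint.h>

uint16_t acc  = 0x0380;      /* 8.8 fixed point: sample 3, plus 128/256 of a step */
uint8_t  step = acc >> 8;    /* integer part: which sample to play (3) */
uint8_t  frac = acc & 0xFF;  /* fractional part: progress toward sample 4 (0x80) */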

This is the most efficient way that I know to implement a DDS routine on a processor with no division, but that’s capable of doing bit-shifts fairly quickly. It’s certainly the classic way. The catch is that the number of samples has to be a power of two, the number of steps per sample has to be a power of two, and the two together have to fit inside some standard variable type. In practice, this often means 8-bit samples with 8-bit steps or 16-bit samples with 16-bit steps for most machines. On the other hand, if you only have a 7-bit sample, you can just use nine bits for the increments.

Goofing Around: Barking Dogs

As a final example, I’d like to run through the same thing again but for a simple sample-playback case. In the demos above we played repeating waveforms that continually looped around on themselves. Now, we’d like to play a sample once and quit. Which also brings us to the issue of starting and stopping the playback. Let’s see how that works in this new ISR.

/* Context assumed for this excerpt (illustrative, not from the repo): */
#define NUM_BARKERS 2
extern const int8_t WAV_bark[3040] PROGMEM;  /* bark wavetable, 3,040 samples */

struct Bark {
    uint16_t increment = ACCUMULATOR_STEPS;
    uint16_t position = 0;
    uint16_t accumulator = 0;
};
volatile struct Bark bark[NUM_BARKERS];

const uint16_t bark_max = sizeof(WAV_bark);

ISR(TIMER1_COMPA_vect) {
    int16_t total = 0;

    for (uint8_t i = 0; i < NUM_BARKERS; i++) {
        total += (int8_t)pgm_read_byte_near(WAV_bark + bark[i].position);

        if (bark[i].position < bark_max){    /* playing */
            bark[i].accumulator += bark[i].increment;
            bark[i].position += bark[i].accumulator / ACCUMULATOR_STEPS;
            bark[i].accumulator = bark[i].accumulator % ACCUMULATOR_STEPS;
        } else {  /*  done playing, reset and wait  */
            bark[i].position = 0;
            bark[i].increment = 0;
        }
    }
    total = total / NUM_BARKERS;
    OCR2A = total + 128; // add in offset to make it 0-255 rather than -128 to 127
}

The code here is broadly similar to the other two. Here, the wavetable of dogs barking just happened to be 3,040 samples long, but since we’re playing the sample once through and not looping around, it doesn’t matter so much. As long as the number of steps per position (ACCUMULATOR_STEPS) is a power of two, the division and modulo will work out fine. (For fun, change ACCUMULATOR_STEPS to 255 from 256 and you’ll see that the whole thing comes crawling to a stop.)

The only difference here is that there’s an if() statement checking whether we’ve finished playing the waveform, and we explicitly set the increment to zero when we’re done playing the sample. The first step in the wavetable is a zero, so not incrementing is the same as being silent. That way, our calling code only needs to set the increment value to something non-zero and the sample will start playing.
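
Triggering a bark from the main loop is then just a matter of poking that structure. A minimal sketch (the function name is mine, not the repo’s; interrupts are briefly disabled because 16-bit writes aren’t atomic on AVR):

void start_bark(uint8_t voice) {
    cli();                                       /* pause the ISR... */
    bark[voice].position    = 0;
    bark[voice].accumulator = 0;
    bark[voice].increment   = ACCUMULATOR_STEPS; /* non-zero: playback starts */
    sei();                                       /* ...and resume */
}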

If you haven’t already, you should at least load this code up and look through the main body to see how it works in terms of starting and stopping, playing notes in tune, and so on. There’s also some thought that went into making the “synthesizer” waveforms in the first examples, and into coding up sampled waveforms for use with simple DDS routines like this. If you’d like to start off with a sample of yourself saying “Hackaday” and running that in your code, you’ll find everything you need in the wave_file_generation folder, written in Python. Hassle me in the comments if you get stuck anywhere.

Conclusion

DDS is a powerful tool. Indeed, it’s more powerful than we’ve even shown here. You can run this exact routine at up to 44 kHz, just like your CD player, but of course at an 8-bit sample depth instead of 16. You’ll have to settle for two or three voices instead of four because that speed is really taxing the poor little AVR inside an Uno. With a faster CPU, you can not only get out CD-quality audio, but you can do some real-time signal processing on it as well.

And don’t even get me started on chips like the Analog Devices high-speed DDS parts that can be had on eBay for just a few dollars. They’re doing the exact same thing, for a sine wave, at very high speed and with amazing frequency accuracy. They’re a far cry from implementing DDS in software on an Arduino to make dogs bark, but the principle is the same.


Filed under: digital audio hacks, Hackaday Columns, slider

by Elliot Williams at February 12, 2016 06:01 PM

February 11, 2016

News – Ubuntu Studio

Ubuntu Studio Xenial Xerus Wallpaper Contest Voting!

First, our warmest thanks to all who have submitted their works to this competition. The level of artistic talent on display is truly impressive! Now that the submission period has ended it is time to select 16 new Ubuntu Studio wallpapers. Please review the photo pool and vote! Review wallpapers HERE. Vote HERE. Voting begins: […]

by Set Hallstrom at February 11, 2016 11:06 AM

January 28, 2016

autostatic.com

Using the Tascam US-144MKII with Linux

Today I got a Tascam US-144MKII from a colleague because he couldn’t use it anymore with Mac OSX. Apparently this USB2.0 audio interface stopped working on El Capitan. Tascam claims they’re working on a driver but they’re only generating bad publicity with that announcement it seems. So he gave it to me, maybe it would work on Linux.

Tascam US-144MKII

First thing I did was plugging it in. The snd_usb_122l module got loaded but that was about it. So much for plug and play. There are reports though that this interface should work so when I got home I started digging a bit deeper. Apparently you have to disable the ehci_hcd USB driver, which is actually the USB2.0 controller driver, and force the US-144MKII to use the uhci_hcd USB1.1 driver instead so that it thinks it’s in USB1.1 mode. This limits the capabilities of the device but my goal for today was to get sound out of this interface, not getting the most out of it.

I quickly found out that on my trusty XPS13 forcing USB1.1 was probably not going to work because it only has USB3.0 ports. So I can disable the ehci_hcd driver but then it seems the xhci_hcd USB3.0 driver takes over. And disabling that driver effectively disables all USB ports. So I grabbed an older notebook with USB2.0 ports and disabled the ehci_hcd driver by unbinding it, since it’s not compiled as a module. Unbinding a driver is done by writing the system ID of a device to a so-called unbind file of the driver that is bound to this device. In this case we’re interested in the system IDs of the devices that use the ehci_hcd driver, which can be found in /sys/bus/pci/drivers/ehci-pci/:

# ls /sys/bus/pci/drivers/ehci-pci/
0000:00:1a.7  bind  new_id  remove_id  uevent  unbind
# echo -n "0000:00:1a.7" > /sys/bus/pci/drivers/ehci-pci/unbind

This will unbind the ehci_hcd driver from the device with system ID 0000:00:1a.7, which in this case is a USB2.0 controller. When plugging in the USB interface it now got properly picked up by the system and I was greeted with an active green USB LED on the interface as proof.

$ cat /proc/asound/cards
 0 [Intel          ]: HDA-Intel - HDA Intel
                      HDA Intel at 0xf4800000 irq 46
 1 [US122L         ]: USB US-122L - TASCAM US-122L
                      TASCAM US-122L (644:8020 if 0 at 006/002

So ALSA picked it up as a device but it doesn’t show up in the list of sound cards when issuing aplay -l. This is because you have to tell ALSA to talk to the device in a different way than to a normal audio interface. Normally an audio interface can be addressed using the hw plugin, the most low-level ALSA plugin, which does nothing more than talk to the driver, and this is what most applications use, including JACK. The US-144MKII works differently though: its driver, snd_usb_122l, has to be accessed with the usb_stream plugin, which is part of the libasound2-plugins package and allows you to set a PCM device name that can be used with JACK for instance. This can be done with the following .asoundrc file that you have to create in the root of your home directory:

pcm.us-144mkii {
        type usb_stream
        card "US122L"
}

ctl.us-144mkii {
        type hw
        card "US122L"
}

What we do here is create a PCM device called us-144mkii and couple it to the card name we got from cat /proc/asound/cards, which is US122L. Of course you can name the PCM device anything you want. Almost all other examples name it usb_stream, but that’s a bit confusing because that is the name of the plugin, and you’d rather have a name that has some relation to the device you’re using. Also, practically all examples use card numbers. But who says that the USB audio interface will always be card 0, or 1? It could also be 2, or 10 if you have 9 other audio interfaces. Other examples work around this by fixing the order of the numbers that get assigned to each available audio interface by adjusting the index parameter of the snd_usb_122l driver. But why do that when ALSA also accepts the name of the card? This also makes things a lot easier to read: it’s now clear that we are coupling the PCM name us-144mkii to the card named US122L. And we’re avoiding having to edit system-wide settings. The ctl stanza is not strictly necessary but it prevents the following warning when starting JACK:

ALSA lib control.c:953:(snd_ctl_open_noupdate) Invalid CTL us-144mkii
control open "us-144mkii" (No such file or directory)

So with the .asoundrc in place you can try starting JACK:

$ jackd -P85 -t2000 -dalsa -r48000 -p512 -n2 -Cus-144mkii -Pus-144mkii
jackd 0.124.2
Copyright 2001-2009 Paul Davis, Stephane Letz, Jack O'Quinn, Torben Hohn and others.
jackd comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details

no message buffer overruns
JACK compiled with System V SHM support.
loading driver ..
apparent rate = 48000
creating alsa driver ... us-144mkii|us-144mkii|512|2|48000|0|0|nomon|swmeter|-|32bit
configuring for 48000Hz, period = 512 frames (10.7 ms), buffer = 2 periods
ALSA: final selected sample format for capture: 24bit little-endian in 3bytes format
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 24bit little-endian in 3bytes format
ALSA: use 2 periods for playback

This translates to the following settings in QjackCtl:

QjackCtl Settings – Parameters / QjackCtl Settings – Advanced

Don’t expect miracles from this setup. You won’t be able to achieve super low latencies, but at least you can still use your Tascam US-144MKII instead of having to give it away to a colleague.

The post Using the Tascam US-144MKII with Linux appeared first on autostatic.com.

by jeremy at January 28, 2016 08:56 PM

January 25, 2016

OSM podcast

January 22, 2016

fundamental code

Linux Audio Slowdown

Is it just me or have things seemed a little quieter in the Linux audio community?

At first I thought it was just what I was seeing and where I read my news from, but out of curiosity I looked into the mailing lists. Looking at the archived gzips of the months gone by, it was relatively clear there haven’t been many very active months recently, but that could be a fluke. Digging a little deeper I scraped the archives to look at both post frequency and the authors involved with the discussions on the development, users, and announcement mailing lists. The post counts were pretty readily reflected in the size of each monthly archive, and plotting the data did show some weak downward trends.

[Plot: monthly posts on the LA mailing lists, 2002–2016]

Of course this doesn’t really capture who is talking about what all that well. Seeing the trend in unique authors per month painted a much more convincing picture for both the user and development mailing lists.

[Plot: unique monthly authors on the LA mailing lists, 2002–2016]

Within each list there’s a fluctuation every year corresponding to more activity during the summer and less during the winter months, but the downward trend is cutting through all of that. These plots roughly match up with the total posts starting to decline around 2010. While I wouldn’t expect the current trend to continue as quickly as it has, the developers’ mailing list is on a trajectory to be virtually dead around 2018. Not exactly great news for a community which needs individuals with specialized development skills to prosper.

Looking at when people stop posting is another way to view this problem: on LAD about 10 people per month make their final post, and on LAU about 15 people per month stop posting.

[Plot: last posts per month on the LA mailing lists, 2002–2016]

Some of this can be explained by the huge number of people who only post once or twice on the mailing list, but that doesn’t change the fact that fewer people are joining the discussion. Looking at the number of new posters and final posters we can see that 2010 pops up again as the point where both lists start to lose more users than they gain.

[Plot: new vs. lost posters on the LA mailing lists, 2003–2015]

I don’t really quite know why this is happening. It could be that mailing lists have been fragmented over the years into project-specific ones, though the project mailing lists I tracked down largely seem to show a similar decline within the audio community. It could be that mailing lists are simply less popular, though other lists such as those for blender, arch linux, and the gimp don’t necessarily show this trend. Or there could be a decline in the overall community size, as the data hints.

These are all possible and given the data out there most of this is a shot in a dimly lit room. I’m interested to see if some of these dynamics can be linked back to some of the months with explosive traffic. As can be seen in the first plot, some months had a few hundred messages more than the surrounding months. While these spikes don’t necessarily draw in more unique authors than average, they may reveal some long term pain points for why interest appears to be waning on the lists.

January 22, 2016 05:00 AM

January 21, 2016

GStreamer News

GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.6.3 stable release (binaries)

Pre-built binary images of the 1.6.3 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

January 21, 2016 01:00 PM

January 20, 2016

News – Ubuntu Studio

Ubuntustudio 16.04 Wallpaper Contest!

Contest Entries are here!! >>> https://www.flickr.com/groups/ubuntustudiocreations/pool/ <<< Where is YOUR entry? Ubuntustudio 16.04 will be officially released in April 2016. As this will be a Long Term Support version, we are most anxious to offer an excellent experience on the desktop. Therefore, the Ubuntustudio community will host a wallpaper design contest! The contest is open to […]

by Set Hallstrom at January 20, 2016 11:44 AM

January 03, 2016

autostatic.com

Using a Qtractor MIDI track for both MIDI and audio

Basically Qtractor only does either MIDI or audio: the MIDI tracks are for processing MIDI and the audio tracks for processing audio. But a MIDI track in Qtractor can also post-process the audio coming out of a synth plug-in with FX plug-ins, so it’s a bit more than just a MIDI track.

But what about plug-ins that do both audio and MIDI, like the LV2 version of the autotuner application zita-at1? If you put it in an audio track it will happily autotune all the audio, but it won’t accept any incoming MIDI, so you can’t make it pitch only to the MIDI notes it is being fed. And there’s no way to get MIDI into a Qtractor audio track: there’s no MIDI insert plug-in and no possibility to somehow expose the MIDI IN ports of a plug-in in an audio track to JACK MIDI or ALSA.

But Qtractor does have a built-in Insert plug-in that can be fed audio from an audio bus, and since a Qtractor MIDI track does know how to handle audio, would it also know how to deal with such an insert? Well, yes it does, which allows you to use a plug-in like the LV2 version of zita-at1 inside a MIDI track.

Setting up buses and tracks

You will need at least one bus and two tracks (of course you can use different bus and track names):

  • AutoTuneMix bus, input only and 2 channels
  • AutoTune MIDI track with dedicated audio outputs (this will create an audio bus called AutoTune)
  • AutoTuneMix audio track with the AutoTuneMix as input bus

Alternatively you could skip the use of dedicated audio outputs and have the MIDI track output to the Master bus. This way you avoid the risk of introducing extra latency and the need to set up extra connections. You do lose the flexibility to do basic stuff on the resulting audio like panning or adjusting the gain, which you can of course work around by using additional panning and/or gain plug-ins.

Once you’ve created the bus and the tracks insert the following plug-ins into the AutoTune MIDI track:

  • Insert
  • Any pre-processing effects plug-ins (like a compressor) – optional
  • LV2 version of zita-at1 autotuner
  • Any post-processing effects plug-ins (like a reverb) – optional

Insert them in this specific order. Alternatively, the post-processing can be done in the AutoTuneMix audio track. Now open the Properties window of the Insert plug-in and then open the Returns window. Connect the mic input of your audio device to the Insert/in ports as shown below.

Qtractor AutoTune Insert

Connect the AutoTune bus outputs to the AutoTuneMix inputs:

Qtractor Connections

Create a MIDI clip with notes to autotune

Create a MIDI clip with the notes you would like to get autotuned in the AutoTune MIDI track, put the play-head at the right position and press play. Now incoming audio from the mic input of your audio device should get autotuned to the MIDI notes you entered in the MIDI clip:

Qtractor Mixer with LV2 version of zita-at1 autotuner

As you can see both MIDI and audio go through the AT1 autotuner plug-in, and the resulting audio is fed into the AutoTuneMix track where you can do the rest of your post-processing if you wish.

The post Using a Qtractor MIDI track for both MIDI and audio appeared first on autostatic.com.

by jeremy at January 03, 2016 07:02 PM

January 01, 2016

Scores of Beauty

“Arnold”

Nearly a year ago I introduced you to significant improvements in LilyPond’s handling of alternative notation fonts and promised a second post. Well, finally here it is, and if you read through to the end you’ll see why it’s no coincidence that it appears on New Year’s Day.

Note: This post is based on functionality provided by openLilyLib. openLilyLib is undergoing several substantial changes, therefore information and/or links in this article may not be valid anymore when you read it. If you should notice any such issues please either add a comment or contact us by other means.

New Font Loading Mechanism

The first post ended with a music example using an alternative font, Improviso, which could be activated by these trivial lines:

\include "openlilylib"
\useLibrary Stylesheets
\useNotationFont Improviso

Improviso default appearance (click to view PDF)

This is of course a huge improvement over the former approach where you had to add a #(define fonts ...) expression in a document’s \paper block in order to change fonts. (And this had also been a huge improvement over what you had to do before.)

But today I would like to direct your attention to a specific issue: the default appearance. Maybe you noticed that it’s not just the notation font but the whole score that looks different from usual LilyPond output. If you simply replaced the notation font using the (set-global-fonts) approach shown in the previous post, the score would look like this:

Improviso without default stylesheet (click to view PDF)

Of course all engraving details in LilyPond’s default appearance have carefully been adjusted to match the appearance of its default font, Emmentaler, the most obvious aspects being the thickness of all sorts of lines. Different fonts may require different settings to look good, and a “handwritten” font like Improviso does so for sure.

Default Stylesheets

So what is the magic that \useNotationFont applies? Well, we have provided default stylesheets for most of the alternative notation fonts, and when the fonts are invoked like this the corresponding stylesheet is automatically loaded. As a result you don’t have to care about adjusting LilyPond’s engraving settings to a different font, everything is done automatically in most cases! Cool, isn’t it?

In order to give you more control over the loading of the font \useNotationFont provides an optional \with {} argument that can be used to set several options through key = value pairs. The above example has been created using the following command:

\useNotationFont \with {
  style = none
} Improviso

Setting style to none causes the font to be activated without any stylesheet, which usually makes sense when you want to define a stylesheet from scratch. But the style option can also be used to select alternative stylesheets (although no such stylesheets are available yet). We intend to also enable locally defined stylesheets, but this is part of the quite long todo list.

Individually Selecting A Brace Font

Most of the alternative fonts ship with a corresponding font for braces, but not all. By default \useNotationFont will fall back to the default Emmentaler when no brace font can be found. However, the brace option lets you explicitly select an arbitrary brace font.

\useNotationFont \with {
  brace = "Gutenberg1939"
} LilyJAZZ

LilyJAZZ font with Gutenberg1939 brace font (click to view PDF)

This example shows a combination of clearly distinct fonts chosen to make the point clear. Nobody would want that in a real-world score, but a practical use case might be to use the Sebastiano brace font – instead of falling back to Emmentaler – together with Scorlatti which doesn’t have its own dedicated brace font.

Arnold: A New Font With Extended Features

I had something like this new interface in mind for some time, but I was finally pushed toward its implementation by the new font Arnold that Abraham Lee created upon my suggestion and that is now (as soon as the site is online again) available from fonts.openlilylib.org. It instantly recreates the atmosphere of a certain repertoire and the characteristic appearance of its original editions. Just let me show you a typical example:


Alban Berg: From “Four pieces” op. 5 (click to view PDF)

A basic stylesheet has been applied, but beyond that no further attempts have been made to tweak this to completely match the original edition. Nevertheless the similarity to Universal Edition scores from around 1910-1920 is astonishing.

In order to show some glyphs of the new font I have deliberately added wrong content here and there – please forgive me if you’re a purist. But once more I have to point towards LilyPond’s outstanding quality of default engraving. There is one limitation that had to be handled manually: LilyPond will always align dynamic letters horizontally to their notes, which doesn’t work well in cramped scores like this. So the f in the middle of the clarinet part, the ff at the beginning of the piano, and the f towards the end of the piano left hand have manually been shifted to the left – please note that I didn’t position them exactly but only added the necessary space so LilyPond could do the actual placement. But apart from this the whole sample has been engraved fully automatically by LilyPond, without the need for any manual intervention.

Interface for Font Extensions

As the previous heading suggests Arnold has “extended features” – and openLilyLib provides a convenient way to access them. The repertoire this font is targeting makes common use of some notation elements not supported by LilyPond and Emmentaler, namely two articulations to indicate strong and weak beats, and marks to indicate principal and secondary voices. Additionally I noticed that the reference scores I investigated contained two different glyphs for the accent, and it seemed a nice idea to provide both. Abraham provided these items as additional named glyphs. LilyPond doesn’t make use of them by default but it is possible to access such additional glyphs directly. Therefore I created markup commands and custom articulations and included them in an “arnold-extensions” stylesheet for convenient access. This extension stylesheet is made available through the syntax

\useNotationFont \with {
  extensions = ##t
}
Arnold

The new commands provided by Arnold extensions are: \arnoldWeakbeat, \arnoldStrongbeat, \arnoldVaraccent articulations, \hauptstimme, \nebenstimme, \endstimme markup commands, and finally \altAccent and \defAccent to permanently switch the accent glyph. You can see most of them in the following short (and slightly modified) excerpt from Arnold Schoenberg’s Wind Quintet op. 26:

Arnold Schönberg, “Bläserquintett für Flöte, Oboe, Klarinette, Horn und Fagott op. 26”, excerpt from oboe part, movement II (click to view PDF)
© Copyright 1925 by Universal Edition A.G., Wien/PH 230 www.universaledition.com

Such extensions are now available for Arnold, but I think this is an approach that can be built upon for other fonts. That way fonts can support additional features for their specific notation purpose, for example ancient or contemporary notation, musical analysis, or popular idioms. The nice thing about separating these as “extensions” is that it is completely separate from other stylesheets the user might select or create.

Anton Webern: Five Songs After Poems By Stefan George Opus 4

At the top of this article I wrote that its release date is not arbitrary. The motivation to publish it today is that the music of Anton Webern passed into the public domain last night. Therefore I want to take the opportunity to make some of his music publicly available today: the Five Songs op. 4 after poems by Stefan George – which had been the initial challenge to develop Arnold for.

Original Edition (Click to enlarge)

First page of our rendering (click to enlarge)

You may want to place the two pages side by side to see how close LilyPond already gets to the atmosphere of the original edition. This is not a finished, publication-quality score, though.

With the active assistance of Peter Crighton and Chris Yate (and help through discussion with several other people) I was able to prepare an edition of that cycle, entering and proofing the music and setting up the basic style sheet. However, it turned out that the music is extremely complicated to engrave, and LilyPond obviously hits the limits of automatic engraving here. This is not embarrassing once you start to realize how “hacky” the original edition actually is, but it will need more time to find solutions for all the challenges of the full score.

Therefore I do make the score publicly available today, but initially only in the form of a source code repository. When the score gets into a presentable state I will upload it to this post, but in the meantime I encourage anybody to join us and complete this first (legal) free edition of a Webern score as a real community effort.

by Urs Liska at January 01, 2016 07:00 PM