I put together an ebuild for Neil, a Buzz/Aldrin-inspired tracker/sequencer for Linux. Why? Because its interface looks more user-friendly than the alternatives. The ebuild was really simple; it took maybe 10 minutes to write, thanks to the magic of scons-utils.eclass and mercurial.eclass. I didn’t even bother to check whether anyone else had written an ebuild for it, since mine was so short and simple. Neil is now available in my overlay.
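For the curious, a live scons + mercurial ebuild really is only a handful of lines. Here’s a minimal sketch from memory, not the actual ebuild from my overlay: the repo URI, homepage, license, and the assumption that the SConstruct honors DESTDIR are all placeholders or guesses.

```shell
# neil-9999.ebuild -- sketch only, not the real ebuild
EAPI=4

inherit scons-utils mercurial

DESCRIPTION="Buzz/Aldrin-inspired tracker/sequencer"
HOMEPAGE="http://example.org/neil"          # placeholder
EHG_REPO_URI="http://example.org/hg/neil"   # placeholder: upstream hg repo

LICENSE="GPL-2"
SLOT="0"
KEYWORDS=""  # live ebuild: no keywords

src_compile() {
    # escons comes from scons-utils.eclass
    escons
}

src_install() {
    # assumes the build system honors DESTDIR for staged installs
    escons install DESTDIR="${D}"
}
```

mercurial.eclass handles the checkout in src_unpack, which is most of why the whole thing stays this short.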
Its UI, judging from the videos, reminds me of Qtractor, at least in playback mode. I bought Renoise, but I’m still scared of its dizzying vertical scrolling and its arcane line-by-line alphanumeric entry. Hopefully Neil provides a gentler introduction to the world of trackers and sequencers... I still want to use Renoise for its primary purpose, not just as an instrument/effects host or a pre/post-production audio manipulation environment. The thing is powerful, but I need something that feels more intuitive, at least to start out.
* * *
Oh, and lately I’ve made a bunch more music. I found myself doing one creative thing per day, much of it recording new tunes. Results are on SoundCloud. There are even a couple of new videos to check out. More audio/video pieces made with Gentoo.
weldroid liked it enough to share with me his vision of the track, which was considerably re-amped and synth-fuzzed, and he gave my sketch a title: “light showers,” which conjured up all kinds of awesome imagery. i liked his interpretation of the material enough to modify his stems, and then perform my modified version live, unrehearsed. it’s even beatier and volt-warped than weldroid’s arrangement, thus: “electrified.”
ioflow > weldroid > ioflow > your eyes+ears
i used rove to perform this piece, on two monome 128s, which were joined into a single virtual 256 by griddle. the top grid is my older secondhand kit-built 128, while the bottom grid is my new grayscale 128. its leds are actually very white; the camera gives them a pleasant blue tint.
my part of the music was produced and recorded with gentoo linux, as was the video editing. ardour and openshot are a good combination, even though manually syncing the audio to the video is a finicky, time-consuming process. it always seems to work out, though.
i haven’t been making much music in the last week. mostly been coding, hacking, and working on things that will allow me to create music and strange new sounds.
for example, i got madrona labs’ aalto software synthesizer working in linux. they just released the windows port a few days ago, and have already expressed interest in creating a linux port, so i pointed them to the cross-platform, open-source juce framework. meanwhile, i set out to get the windows version running in linux.
i put together an ebuild for dssi-vst, then loaded up the 32-bit version of aalto. dssi-vst doesn’t support running 64-bit dlls on 64-bit hosts. for that, i would probably have to run a windows 64-bit host, such as vsthost or savihost, inside a 64-bit version of wine, with aalto loaded into one of those hosts.
in brief, here’s how to run aalto:
1. add the overnight overlay, then install the vst host:

# echo media-plugins/dssi-vst >> /etc/portage/package.accept_keywords
# emerge dssi-vst

2. download, unzip, and install the windows version. i used the demo; the full version almost certainly works just as well.

$ unzip AaltoWinDemo1.2.zip
$ wine AaltoWinDemo1.2.exe

3. next, copy the dlls to your shared vst plugin directories. on my multilib 32/64-bit system, those are /usr/lib32/vst and /usr/lib64/vst.

# cp ~/.wine/drive_c/VSTPlugIns/Aalto.dll /usr/lib32/vst
# cp ~/.wine/drive_c/VSTPlugIns/Aalto_x64.dll /usr/lib64/vst

4. run the 32-bit aalto plugin. note that dssi-vst can’t load 64-bit plugins, so the 64-bit dll will need a different host; that should be doable.

$ vsthost /usr/lib32/vst/Aalto.dll
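the copy-and-launch steps can be wrapped in a small script. a hedged sketch, not something from my overlay: the install_dll helper and the DRYRUN switch are my own invention, and the script defaults to only printing the commands, so nothing touches /usr/lib* until you set DRYRUN to empty and run it as root.

```shell
#!/bin/sh
# sketch: stage the aalto dlls from the wine prefix into the shared
# vst directories, then launch the 32-bit build under dssi-vst's vsthost.
# DRYRUN=1 (the default here) prints each command instead of running it.
set -e

DRYRUN="${DRYRUN:-1}"
WINEPREFIX="${WINEPREFIX:-$HOME/.wine}"
SRC="$WINEPREFIX/drive_c/VSTPlugIns"

install_dll() {
    # $1 = dll filename, $2 = destination vst directory
    ${DRYRUN:+echo} cp "$SRC/$1" "$2/"
}

install_dll Aalto.dll     /usr/lib32/vst
install_dll Aalto_x64.dll /usr/lib64/vst

# dssi-vst can only host the 32-bit dll:
${DRYRUN:+echo} vsthost /usr/lib32/vst/Aalto.dll
```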
i’ve also managed to semi-successfully get ableton live running in wine. however, it’s kinda slow on my lowly 2.3ghz dual-core amd system, so i can’t recommend it as your sole audio tool. wineasio is required to really do anything with it, but you should still be prepared for significant slowdowns, freezes, and other bugs. there are a fair number of issues and workarounds on the wine wiki page for live.
besides getting windows software running on linux, i’ve been creating ebuilds for linux-native vsts, such as the mda and tal plugins. ebuilds are in my overlay. both sets of plugins are capable of some really intriguing sounds, and i’ve barely started digging into what they can do.
i submitted my sample for mcrp 10, the next monome community remix project album. the concept for number 10: each participant chooses a 15-second sample from previous mcrp releases and tosses it into a pool, then everyone pulls from the pooled samples to create their own works. it’s gonna be a lot of fun! i learned a lot about what renoise can do while preparing my sample.
two music + performance videos i created in the last few days, using gentoo linux, a monome, and software. detailed descriptions of the applications and production process are on the individual video pages.
ricochet and renoise running on a monome 128. ricochet 0.2 is a generative midi sequencer; renoise 2.7 provides all the sounds. what you see is entirely unrehearsed: as a generative instrument, ricochet never quite repeats itself.
rove and polygomé running on a monome 128. it is divided into two virtual monome 64s by griddle, a spanner/splitter router application written in python.
running inside griddle is pages, a java-based utility that executes multiple applications on the same monome. inside pages are two more applications, one for each virtual half of the physical device: rove, an mlr-style live sample cutter, and polygomé, a midi instrument. rove is on the left half, polygomé on the right.
i’d need several pieces of paper to create another flowchart as i did for this song, as the internal audio/data routing was quite complex, even though i’m only using two physical controllers.
once again, renoise provided the percussion, controlled by polygomé, while the rhodes piano samples are from jared smyth.
my apologies for the audio quality of this video. i forgot to press “record” in jack timemachine. even though the percussion is hard to hear for most of the video, a rather nice lo-fi feel emerged from the rest of the performance.
i’ve had to write a lot of ebuilds for all the applications i’ve been using, which i’ve made available in my overlay on github.
the videos themselves were filmed with my cell phone camera (htc mytouch 4g), and edited with openshot. there’s an openshot ebuild available on bugzilla, but it’s outdated. i’ll put my local version up on the overlay soon; it has some improvements. side note: openshot can’t show the audio waveform for clips, which makes it very hard to sync up sounds. i try to record the audio on my computer, since my cell phone’s mic is terrible and it has no audio input ports. for “falling up,” i had to sync the audio sources the old-fashioned way: by ear. being able to visualize the new waveform, then drag it into position over the old one, would be really nice. any video editors for linux that can do this?
while my desktop workstation’s old dual-core athlon 4450e is just fine for audio processing, especially since i’m using the ck-sources 2.6.39 kernel, it’s way too slow for transcoding and rendering video. the five-minute “falling up” video took more than forty minutes to transcode from 3gp to h264, with a plain 80mb wav file for the raw audio. i’m definitely thinking about switching my workstation over to one of the latest quad-core intel sandy bridge chips.
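for reference, the transcode itself can be a single ffmpeg invocation. this is just a sketch, not what i actually ran: the filenames and quality settings are examples, and the option names assume a reasonably current ffmpeg. -map 0:v takes the video from the phone clip, and -map 1:a substitutes the wav recorded on the computer.

```shell
#!/bin/sh
# sketch: replace the phone clip's audio with a separately recorded wav
# while re-encoding the video to h.264. filenames are examples only.
# DRYRUN=1 (the default here) prints the command instead of running it.
DRYRUN="${DRYRUN:-1}"

transcode() {
    # $1 = raw 3gp clip, $2 = wav recorded on the computer, $3 = output
    ${DRYRUN:+echo} ffmpeg -i "$1" -i "$2" \
        -map 0:v -map 1:a \
        -c:v libx264 -crf 20 \
        -c:a aac -shortest "$3"
}

transcode falling_up.3gp falling_up.wav falling_up.mp4
```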
i wish there was a way to use my gpu to speed up encoding, but i’m using the xf86-video-ati driver with a radeon hd 4550. pretty sure there’s no solution for that combination. just gotta throw more hardware at the problem until i see better results.