Trackers! What are they?

Tracker is the generic term for a class of software music sequencers which, in their purest form, allow the user to arrange sound samples on a timeline across several mono channels. The interface is mainly numeric: notes are entered with the computer keyboard, while parameters (effects and so on) are entered in hexadecimal. A full song consists of several multi-channel patterns held together by a master list. There are several elements common to any tracker program: samples, notes, effects, tracks (or channels), patterns, and orders.

Samples –  A sample is a digital sound file of an instrument, voice, or other sound.

Note –  A note designates what frequency a sample is played back at.

Effect –  An effect is a special function applied to a note. Common effects include vibrato, arpeggio, and portamento.

Track –  A track is a space where a sample is played back. Modern tracker software offers a virtually unlimited amount of tracks to use.

Pattern –  A pattern is a grouping of simultaneously played tracks that represent a full section of the song.

Order –  An order is a position in a sequence of patterns that defines the layout of a song.

The History of Tracker Software

The term “tracker” comes from a piece of software called “Ultimate Soundtracker”, the first tracker program, written by Karsten Obarski and released in 1987 for the Commodore Amiga. The general concept behind the program, step-sequencing samples numerically, can be found in sampling workstations as far back as the late 1970s. Most early tracker musicians were from the United Kingdom and Scandinavia. This may be attributable to the tracker’s close relationship with the demo scene, which grew rapidly in the Scandinavian countries. Tracking grew very popular with home audio recording fans, as it did not require an expensive wavetable sound card to function. During the 1990s, after the introduction of the Sound Blaster line of sound cards for the PC, tracker music migrated over from the Amiga. The Gravis Ultrasound, which continued the hardware-mixing tradition with 32 internal channels and onboard memory for sample storage, offered unparalleled sound quality and became the card of choice for discerning tracker musicians. Modern software and hardware have largely made tracker software obsolete, but it still lives on, primarily in video games as well as a number of indie games. The tracker’s stigma of being complicated and difficult to learn is being discarded as new tracker programs become more and more user friendly, and tracking has recently enjoyed a mild resurgence as people begin to appreciate being able to lay down music as quickly as possible. Some modern musicians who use trackers as part of their audio production process include Brothomstates, Bogdan Raczynski, and Venetian Snares.
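Putting the elements defined at the top of this article (samples, notes, tracks, patterns, and orders) together, here is one hypothetical way to model a tracker module in Python. This is purely an illustrative sketch: the class and field names are my own invention and do not correspond to any real module format.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    sample: int      # index into the module's sample list
    pitch: str       # e.g. "C-4": note name plus octave
    effect: int = 0  # effect command, conventionally shown in hex
    param: int = 0   # effect parameter, also hex

@dataclass
class Pattern:
    # a grid of rows x tracks; None means "no new note on this row"
    rows: list = field(default_factory=list)

@dataclass
class Module:
    samples: list = field(default_factory=list)  # one audio sample per instrument
    patterns: list = field(default_factory=list)
    orders: list = field(default_factory=list)   # indices into patterns: the song layout
```

The order list is what turns a handful of patterns into a full song: the same pattern index can appear several times, so a repeated chorus costs no extra pattern data.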
  Jason Cole and DiskFaktory Mastering offer great professional mastering services and information regarding audio engineering and CD mastering in California. Get the professional cd mastering information you are seeking now by visiting http://diskfaktory-mastering.com/evaluation.htm

Alright, thanks for coming back for my next installment in this article series concerning audio effects. In the past few weeks we have covered quite a few commonly used effects, such as reverb, delay, flange, and compression. These effects all have wide scopes of usage, and are probably used on a daily basis in most modern recording studios, on a wide range of different musical styles. We were discussing the basic functionality of each effect, leaving you to decide if and how you would end up using it in your own production process. 

Today we’ll be moving on to a couple of other types of effects, which are probably less used than the others we’ve talked about. Don’t let that fact deter you from reading any further, as these two effects, the vocoder and Auto-Tune, are both very flexible and powerful tools to add to your studio arsenal. So, please strap on your learning cap and follow me.

Vocoder

The vocoder (its name derived from “voice encoder”) is a speech analyzer and synthesizer. It was originally created in the 1930s as a speech coder for the telecommunications industry, used for secure radio communication, where voice has to be digitized, encrypted, and then transmitted on a narrow, voice-bandwidth channel. The vocoder works by finding the basic carrier wave that the human voice produces. This carrier wave is at the fundamental frequency (the lowest frequency in a harmonic series). As someone speaks, the vocoder measures how the voice’s spectral characteristics change over time, producing a series of numbers that represent those frequencies at each moment. To recreate speech, the vocoder simply reverses the process: it creates the fundamental frequency with an oscillator, then passes it through a stage that filters the frequency content based on the originally recorded series of numbers. For musical applications, a source of musical sounds (such as a guitar) is used as the carrier instead of an extracted fundamental frequency. The vocoder is famous for creating robotic-sounding voices, and has been used in film to create, unsurprisingly, robot voices.
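The analyze-and-resynthesize process described above can be sketched in Python. This is a deliberately crude toy (it splits both signals into bands with NumPy’s FFT rather than the analog or time-domain filter banks real vocoders use, and the band count and frequency range are arbitrary choices of mine), but it shows the core idea: measure the modulator’s energy per band, then impose that energy on the carrier.

```python
import numpy as np

def vocoder(modulator, carrier, sr, n_bands=16, f_lo=100.0, f_hi=8000.0):
    """Toy channel vocoder: impose the modulator's band envelopes on the carrier."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    n = len(modulator)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    M, C = np.fft.rfft(modulator), np.fft.rfft(carrier)
    out = np.zeros(n)
    win = int(sr * 0.01)                            # 10 ms envelope smoothing
    kernel = np.ones(win) / win
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        m_band = np.fft.irfft(M * mask, n)          # modulator, this band only
        c_band = np.fft.irfft(C * mask, n)          # carrier, this band only
        env = np.convolve(np.abs(m_band), kernel, mode="same")  # band envelope
        out += c_band * env                         # carrier shaped by the voice
    return out / (np.max(np.abs(out)) + 1e-12)      # normalize
```

Feeding speech in as the modulator and a sawtooth or guitar track as the carrier yields the classic robot-voice effect.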

Auto-Tune

Auto-Tune is used for correcting pitch in vocal and instrumental performances. It works by employing digital signal processing algorithms (many of which are drawn from the geophysical industry) to continuously detect the pitch of a periodic input signal and shift it to a desired pitch. The correction is intended to improve the musical quality of a vocal track without revealing that the singing has been processed. This works well in a studio environment to correct the performances of vocalists and musicians after they have recorded their takes. It has also been widely used with extreme parameter settings to create a distinct electronic vocal sound.

This wraps up the 5th installment in the DiskFaktory Mastering article series on audio effects. Today we covered a couple of the more fun effects to work with, in my opinion. I myself have learned quite a bit writing today’s article, and I hope you feel the same way. Now we’re much better prepared to create a symphony of robot voices.
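The two core steps of the pitch correction described in the Auto-Tune section (detect the pitch of a periodic signal, then move it to the nearest desired note) can be sketched as follows. This is a toy illustration, not Auto-Tune’s actual algorithm: the autocorrelation detector and the snap-to-equal-temperament rule are simplistic stand-ins of my own, and a real corrector would also resample the audio to apply the shift.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=1000.0):
    """Crude pitch detector: find the strongest autocorrelation lag."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)     # lag range for plausible pitches
    lag = lo + np.argmax(ac[lo:hi])             # strongest period in that range
    return sr / lag

def snap_to_semitone(f0, a4=440.0):
    """Snap a detected frequency to the nearest equal-tempered pitch."""
    n = round(12 * np.log2(f0 / a4))            # nearest semitone offset from A4
    return a4 * 2 ** (n / 12)
```

For example, a vocal note drifting around 445 Hz would be detected and snapped back to A4 at 440 Hz; the ratio between the two frequencies is what a corrector would then apply to the audio.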


Audio effects! Is there anything they can’t do? We’re continuing on with this series; this article is part 4. I was thinking about the best way to abstractly describe the function and importance of knowing your audio effects well, and this is what I came up with. Your studio is basically your toolbox, with all your effects and gear being the tools in it. Most people know their tools pretty well, but few are masters. To hammer a nail, ideally you’d want to use a hammer; it would be the most efficient and easiest way to do it. You could use a screwdriver or even a wrench for the same job, but it may take more time and your end result might not be up to your standards. So, basically, I’m saying you need to master all of your tools before you can produce and edit music correctly. Well, that was a long-winded explanation for a simple idea. Moving on.

Today we’re going to be discussing phase shifting and chorus effects. Phase shifting is kind of cool, and I’m really excited to delve into how it works. Chorus is a basic effect, and may not elicit excitement in most of you. But like any effect, it’s one of those that is used all over the place so often that you probably can’t tell when it’s used. Anyways, let’s discuss how these effects work and why they work the way they do.  

Phase Shifting

The first phase-shifting effect units were pretty simple. Phasing was originally produced by copying the sound onto two analog tape decks and mixing them together. One deck was run slightly faster than the other, and the phasing effect was created by the rising and falling “wave interference” of the two signals. The term phasing more specifically refers to a swept comb-filtering effect in which there is no linear harmonic relationship between the teeth of the comb. A flanger is a sub-type of phaser, with its effect usually being more precise, produced by making the comb filter’s harmonic relationship linear. Phasing effects in modern music are typically used with electric guitar, and also to “sweeten” the sound of electric keyboards. A fun fact: a phaser was used to create C-3PO’s voice in the movie Star Wars, because the phaser sound lends a synthetically generated feel to the human voice.
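A modern digital phaser replaces the two tape decks with a cascade of all-pass filters whose coefficients are swept by a low-frequency oscillator; mixing the swept copy back with the dry signal produces the moving notches. Here is a minimal sketch of that idea (the function name, stage count, and sweep settings are arbitrary choices of mine, not any particular unit’s design):

```python
import numpy as np

def phaser(x, sr, stages=4, rate=0.5, depth=0.7):
    """Toy phaser: sweep a cascade of first-order all-pass filters with an LFO."""
    # the LFO sweeps each all-pass coefficient, which moves the notch frequencies
    lfo = depth * np.sin(2 * np.pi * rate * np.arange(len(x)) / sr)
    y = x.astype(float)
    for _ in range(stages):
        out = np.zeros_like(y)
        x1 = y1 = 0.0  # one-sample delay state
        for n in range(len(y)):
            a = lfo[n]
            # first-order all-pass: out[n] = -a*in[n] + in[n-1] + a*out[n-1]
            out[n] = -a * y[n] + x1 + a * y1
            x1, y1 = y[n], out[n]
        y = out
    # mixing the dry and phase-shifted copies creates the sweeping notches
    return 0.5 * (x + y)
```

Each all-pass stage leaves the amplitude of every frequency untouched and only shifts its phase, which is why the notches appear only after the wet and dry signals are summed.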

Chorus

When chorus occurs naturally, individual sounds with roughly the same timbre and nearly the same pitch converge and are perceived as one. When it is successful, all the sounds hold the same tune, and it sounds as if they all came from the same source. The chorus effect is enhanced when the sounds originate from different moments in time and from different physical locations. To produce this effect artificially, a processor takes an audio signal and mixes it with one or more delayed, pitch-shifted copies of itself. This results in a single sound that simulates the sound of several instruments or voices.
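The “delayed, pitch-shifted copy” is usually produced with a single trick: reading the signal back through a delay line whose length slowly wobbles. A shrinking or growing delay slightly raises or lowers the copy’s pitch, which is exactly the small detuning chorus needs. A minimal sketch (function name and parameter values are my own illustrative choices):

```python
import numpy as np

def chorus(x, sr, delay_ms=20.0, depth_ms=5.0, rate=1.5, mix=0.5):
    """Toy chorus: mix the input with a copy read through a wobbling delay line."""
    n = np.arange(len(x))
    # the delay time wobbles around delay_ms, giving the copy a slight pitch drift
    delay = (delay_ms + depth_ms * np.sin(2 * np.pi * rate * n / sr)) * sr / 1000.0
    pos = n - delay                        # fractional read position
    i = np.floor(pos).astype(int)
    frac = pos - i
    valid = i >= 0                         # before the delay line fills, the copy is silent
    wet = np.zeros_like(x, dtype=float)
    j = np.minimum(i[valid] + 1, len(x) - 1)
    # linear interpolation between the two nearest samples (fractional delay)
    wet[valid] = (1 - frac[valid]) * x[i[valid]] + frac[valid] * x[j]
    return (1 - mix) * x + mix * wet
```

Running two or three of these copies with different rates and depths, then summing them, gets closer to the “several players at once” illusion described above.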

Alright, this wraps up the 4th installment in my audio effects article series. I never knew how the chorus effect worked, and now that we discussed it, it seems like the name of the effect is exactly what it does. And phase shifting was sort of a carry-over from the article discussing flange. But since flange is basically a type of phase shifter, I think that it was very important that we discussed it in this article. Anyways, hope you all learned something in this article. Please stay tuned for my next installment in this continuing series.  


Moving on down the line, today we continue our series on audio effects and editing tools. In the past few articles we’ve covered such effects as reverb, flange, delay, and noise gate. If you enjoy dabbling in audio production, you’re going to enjoy today’s article. We’ll be discussing compression, which is instrumental in the audio production arena, along with ring modulation, which is a more fun and flexible audio effect. Let’s discuss how these effects work and why they work the way they do.

Compression

Compressors reduce the dynamic range of an audio signal whenever its amplitude exceeds a set threshold. The amount of reduction is determined by a set ratio: with a ratio of 6:1, the input level must rise 6 dB above the threshold for the output level to rise by 1 dB. A compressor reduces dynamic range by using a variable-gain amplifier, which turns down the gain of the audio signal. Analog compressors typically carry this out with a voltage-controlled amplifier, which reduces the gain as the input signal’s power increases. Digitally, compression is carried out via DSP (digital signal processing), and this is the most modern version of the effect. The main use of compression is to make music sound louder without increasing its peak amplitude: compressing the peaks (the loudest parts of the signal) allows you to increase the overall gain without exceeding the dynamic limits of your reproduction device. Compression is widely used in TV and radio, allowing maximum perceived volume without going over the strict limits imposed on broadcasters.
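The threshold-and-ratio behavior described above boils down to a simple static curve. Here is a sketch of that curve in Python (the function name and defaults are illustrative, not any particular compressor; a real unit also smooths the gain changes with attack and release times):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=6.0):
    """Static compressor curve: map an input level (dB) to an output level (dB)."""
    if level_db <= threshold_db:
        return level_db  # below the threshold, the signal passes unchanged
    # above the threshold, `ratio` dB of input yields only 1 dB of output
    return threshold_db + (level_db - threshold_db) / ratio
```

For example, with a -20 dB threshold and a 6:1 ratio, an input peak at -8 dB sits 12 dB over the threshold, so it comes out at -18 dB: only 2 dB over. The 10 dB of reduction is the headroom you can then reclaim with makeup gain.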

Ring Modulation

Ring modulation is achieved by multiplying two audio signals, with one of them typically a simple waveform such as a sine wave. The modulator combines the two signals, outputting the sum and difference of their frequencies. Ring modulation is related to amplitude modulation and frequency mixing, and it produces a signal rich in overtones. It is well suited to producing metallic and bell-like sounds. Modern ring modulators, like modern compressors, use digital signal processing to produce the effect. Using DSP produces a mathematically perfect output, which some musicians do not like. You can come up with some interesting harmonics by changing the frequencies of the two input waveforms.
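Digitally, ring modulation really is just sample-wise multiplication, and the sum-and-difference behavior falls straight out of the trigonometry (sin a · sin b contains a−b and a+b). A small NumPy sketch shows it; the frequencies here are arbitrary example values:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr                  # exactly one second, so FFT bin k == k Hz
signal = np.sin(2 * np.pi * 440 * t)    # input signal
carrier = np.sin(2 * np.pi * 100 * t)   # simple sine-wave carrier
ring = signal * carrier                 # ring modulation: sample-wise multiplication

# the product contains the sum and difference frequencies, 540 Hz and 340 Hz,
# and neither of the original 440 Hz or 100 Hz tones
spectrum = np.abs(np.fft.rfft(ring))
top_two = sorted(np.argsort(spectrum)[-2:].tolist())
```

Because the original pitch disappears and only the sidebands remain, sweeping the carrier frequency against a harmonic input is what produces the inharmonic, bell-like tones mentioned above.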

This is the third installment in my continuing series on audio effects and engineering tools. We discussed compressors and ring modulators today, a couple of very interesting and deep effects. I learned a lot myself, so I hope that you did as well. We will be continuing this series indefinitely, until we run out of effects! I hope that this has shed a little light on these two amazing pieces of equipment, ultimately making your next music project a bit more interesting and productive.  


Continuing the audio effects series: we all know of audio effects and what they are generally supposed to do. They are used to manipulate audio in ways that are not available with traditional playing and recording techniques. If you’re like me and enjoy dabbling in audio production, you’re probably familiar with all the basic effects and maybe some other types. Noise gate will be one of the topics of discussion today. Noise gate, what the heck is that? If that was your first reaction, you’re not alone. Please don’t worry; we will be demystifying this subject later on in the article. We will also be discussing flange, which is a more standard and widely used audio effect. So, in today’s article we will be discussing both noise gate and flange effects: how they work and why they work the way they do.

Noise Gate

Basically, a noise gate is a device or piece of software logic used to manage the volume of an audio signal, both in recording studios and in sound reinforcement. Gates are also used by musicians, in portable form, to control amplification noise. In its simplest form, it controls noise by only allowing sound to pass once it exceeds a set threshold. Think of it as a literal gate: when the gate is open, sound can pass; when the gate is closed, no signal is allowed through. More robust noise gate units have extra controls, e.g. attack, hold, and release, so that you can further shape the gating of your audio. Say you’d like to have the gate applied in a hard fashion: you would set a short attack and a short release, and so on. Noise gates are often used to isolate background noise in live recordings in order to eliminate it from the final copy.
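The open/closed behavior, plus the attack and release smoothing, can be sketched in a few lines. This is a toy illustration (the envelope follower and the parameter defaults are my own simplifications, not any specific unit):

```python
import numpy as np

def noise_gate(x, sr, threshold=0.1, attack_ms=1.0, release_ms=50.0):
    """Toy gate: open when the envelope exceeds the threshold, with smoothing."""
    a_coef = np.exp(-1.0 / (sr * attack_ms / 1000.0))    # fast opening
    r_coef = np.exp(-1.0 / (sr * release_ms / 1000.0))   # slower closing
    gain = np.zeros(len(x))
    g = 0.0    # current gate gain, 0 = closed, 1 = open
    env = 0.0  # simple peak-follower envelope of the input
    for n in range(len(x)):
        env = max(abs(x[n]), env * r_coef)
        target = 1.0 if env > threshold else 0.0          # the literal gate decision
        coef = a_coef if target > g else r_coef
        g = target + coef * (g - target)                  # slew toward open/closed
        gain[n] = g
    return x * gain
```

A “hard” gate, as described above, is just short attack and release times; lengthening the release lets note tails breathe through instead of being chopped off.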

Flange

Flange is related to the phasing effect produced by, well, a phaser effects unit. It is produced when two identical signals are mixed together, with one of the signals time-delayed by a small and gradually changing amount, usually 20 milliseconds or less. Peaks and notches are produced in the combined frequency spectrum, related in a linear harmonic series. Part of the output signal is fed back in and resonates, intensifying the peaks and notches. The effect was originally generated with three tape machines: two played back the same signal slightly out of sync, while the third recorded the combined output. The modern version of the effect is created using DSP (digital signal processing) technology.
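A DSP flanger replaces those tape machines with a short, swept delay plus a feedback path. The following sketch keeps it deliberately simple (whole-sample delays, no interpolation; names and defaults are my own illustrative choices):

```python
import numpy as np

def flanger(x, sr, max_delay_ms=5.0, rate=0.25, feedback=0.5):
    """Toy flanger: mix the signal with a copy delayed by a sweeping amount."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        # the delay sweeps between 0 and max_delay_ms at the LFO rate
        d = 0.5 * max_delay_ms * (1 + np.sin(2 * np.pi * rate * n / sr)) * sr / 1000.0
        i = int(n - d)
        # feed part of the output back in, intensifying the peaks and notches
        delayed = x[i] + feedback * y[i] if i >= 0 else 0.0
        y[n] = x[n] + delayed
    return y / np.max(np.abs(y))  # normalize the wetter, louder result
```

Because the delayed copy stays within a few milliseconds of the dry signal, the resulting comb notches land on a linear harmonic series, which is exactly what distinguishes flanging from general phasing.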

This is the second part in my continuing series on audio effects. Today we discussed noise gate and flange; we’ll be moving on to some more advanced effects later on. I hope that this helped you all understand the basic functionality of these two effects, ultimately making your next foray into audio editing a bit less intimidating.


Audio effects: we all know what they are, sort of. They are used to manipulate audio in ways that are not available with traditional playing and recording techniques. If you’re like me and enjoy dabbling in audio production, you’re probably familiar with all the basic effects. Reverb is one of them, and probably the easiest to explain: it adds space to your audio. Delay, on the other hand, is a little more difficult to explain. Again, if you’re like me, you want to fully understand how these effects work, so that when you go to use them you know them inside and out. In today’s article we will be discussing reverb and delay: how they work and why they work the way they do.

  

Reverb

Sound produced in an enclosed space reflects off surfaces and blends together, creating reverberation (reverb for short). So, basically, reverb is the reflection of sound waves from solid surfaces to our ears. It is most easily identified when the sound stops but you continue to hear the reflections as they decrease in amplitude. Large rooms or chambers are some of the best producers of natural reverb. There are also a few different electronic mechanisms that produce reverb artificially. These types are:

  

1. Plate reverberators – This type of reverb uses large metal plates suspended by strings inside damped cases to manufacture the effect. Transducers apply the signal to the plates, and electronic pickups then convert the plates’ vibrations back into an electrical signal.

2. Spring reverberators – These are similar to plate reverberators, except that springs are used in place of plates. Spring reverberators are often integrated into instrument amplifiers, and are considered the most artificial-sounding type of reverb.

3. DSP reverberators – DSP reverb units use signal-processing algorithms to create the reverb effect, using long delays, envelope shaping, and other processes. This is the most widely used and most flexible form of reverb.

4. Chamber reverberators – This is the most “natural” form of reverb, though it is still produced artificially. A reverb chamber is basically a room with solid walls, a loudspeaker at one end, and microphones at the other. The audio is played through the loudspeaker, bounced off the walls, and then recorded by the microphones.
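The DSP approach from item 3 can be illustrated with the classic Schroeder topology: several parallel feedback comb filters build the decaying echo tail, then series all-pass filters thicken it without coloring the tone. This is a bare-bones educational sketch (the delay times and gains are textbook-style example values, not a production algorithm):

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = np.copy(x)
    for n in range(delay, len(x)):
        y[n] += g * y[n - delay]
    return y

def allpass(x, delay, g):
    """Schroeder all-pass: adds echo density without changing the tone color."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def reverb(x, sr):
    """Classic Schroeder layout: parallel combs feeding series all-passes."""
    combs = sum(comb(x, int(sr * d), 0.8) for d in (0.0297, 0.0371, 0.0411, 0.0437))
    y = allpass(combs, int(sr * 0.005), 0.7)
    return allpass(y, int(sr * 0.0017), 0.7)
```

Feeding it a single impulse (a click) produces a dense, decaying tail, which is exactly the "sound continues after the source stops" behavior described at the top of this section.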

  

Delay

The basic delay effect records an input signal and then plays it back after a set period of time. The first delay units used reel-to-reel magnetic recording systems and tape loops to produce the effect.

  

1. Analog Delay – This was the first type of delay employed in the audio engineering field. One type of analog delay unit used magnetic tape as the recording and playback medium. Motors guided the tape through the device, with different mechanisms modifying the effect’s parameters. The tape in this type of delay would break down after a while, so it had to be replaced from time to time to maintain fidelity. Other types of analog delay used magnetic drums or spinning magnetic discs instead of tape as the storage medium, the main advantage being the increased durability of the medium.

  

2. Digital Delay – This type of delay unit became popular in the late 1970s, but at the time it was only available as an expensive rack-mounted unit. The BOSS DD-2 changed that in 1984, making the effect available in an affordable foot pedal. Digital delay works by sampling the audio being processed, recording it to a storage buffer, and then playing it back according to the parameters set by the user. Different digital delay units offer different digital signal processing options, but in my opinion, digital delay units are the more powerful and flexible of the two types. Many guitar players use this effect, although some people believe that digital delay sounds a bit artificial compared to its analog counterpart.
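The record-to-a-buffer-and-play-back process described for digital delay is usually implemented as a circular buffer. A minimal sketch (names and defaults are illustrative; a real pedal adds filtering and modulation on the repeats):

```python
import numpy as np

def digital_delay(x, sr, delay_ms=300.0, feedback=0.4, mix=0.5):
    """Toy digital delay: a circular buffer read back delay_ms later."""
    d = int(sr * delay_ms / 1000.0)
    buf = np.zeros(d)                        # the storage buffer
    out = np.empty(len(x))
    w = 0                                    # write position in the buffer
    for n in range(len(x)):
        delayed = buf[w]                     # oldest sample: written d samples ago
        buf[w] = x[n] + feedback * delayed   # re-record with feedback for repeats
        out[n] = (1 - mix) * x[n] + mix * delayed
        w = (w + 1) % d                      # wrap around: "circular" buffer
    return out
```

The feedback term is what turns a single echo into a train of repeats, each one quieter than the last.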

  

This is the first part in my continuing series on audio effects. I’ll be covering some of the more standard effects first, like today’s subjects, and then move on to the more advanced effects later on. I hope that this shed some light on the subject, making your next foray into audio recording or editing a little easier and more fun.


If you have been following my Microphones 101 series of articles, you will have read about four of the most important microphone types: dynamic, condenser, ribbon, and carbon microphones. This last update will cover a few less-used mic types: laser, lavalier, contact, and parabolic microphones. If you jumped in on this article without reading the first two, here’s a little recap of how microphones work.

How microphones work – in a nutshell

A microphone captures sound waves with a thin, flexible piece of metal known as a diaphragm. Sound waves entering the microphone vibrate the diaphragm, and the vibrations are then converted by various methods into an electrical signal that is an analog of the original sound. There are various types of microphones; today we’ll be discussing laser, lavalier, contact, and parabolic microphones.

1. Laser Microphones

A laser microphone utilizes, well, laser technology to capture vibration and convert it into sound. The laser is reflected off of glass or another flat, rigid surface that vibrates with nearby sound. By very accurately measuring the distance between itself and the reflecting surface, the device detects the fluctuations of that surface as nearby sound waves vibrate it. This form of mic is portrayed in movies as spy equipment, but contrary to its portrayal, the device is very new, expensive, and not very portable.

2. Lavalier Microphones

This type of microphone is commonly used for hands-free operation, usually clipped to a person’s lapel. Lavalier mics often have their own power source and can run directly into the mixer, or may be wireless, which makes them ideal for television.

3. Contact Microphones

In the world of microphones, contact mics are a little different from the rest. They are designed to pick up sound vibrations from solid objects rather than vibrations carried through the air, and are mainly used to record low-level sounds that a regular mic could not pick up. These mics consist of a moving-coil transducer, a contact plate, and a pin: the contact plate is placed on the object you would like to record, and the vibration passes through the plate to the pin, which passes it to the transducer. The experimental electronic music group Matmos used this technique on their album “A Chance to Cut Is a Chance to Cure” to record the neural activity of a crayfish.

4. Parabolic Microphones

Parabolic microphones use a parabolic reflector to collect and focus sound waves onto a microphone receiver, similar in function to the way a satellite dish picks up radio waves. These mics are commonly used for law-enforcement surveillance, but they are not well suited for regular recording, as their low-frequency response is very poor.

So this wraps up our Microphones 101 series of articles. I hope you all learned as much from these articles as I have from researching and writing them!

Hi all,

Welcome back to our Microphones 101 series. Here is the 2nd entry in our series – Ribbon and Carbon mics….

——————————————————————————————————————

In my last article we covered two different types of microphones – Dynamic and Condenser. In this article we will be discussing two other types, ribbon and carbon microphones. I will be presenting some basic information regarding how these microphones function, and what their respective applications center around. Before we jump into the two new types of mics, here’s a little basic refresher on how microphones work. 

Microphones, how do they work?

A microphone captures sound waves with a thin, flexible piece of metal, also known as a diaphragm. When sound waves enter the microphone, they vibrate the diaphragm. The vibrations are converted by various methods into an electrical signal that is an analog of the original sound.

There are various types of microphones; today we’ll be discussing ribbon and carbon microphones.

1. Ribbon Microphones

The diaphragm of a ribbon microphone is usually a corrugated piece of metal suspended in a magnetic field. This ribbon picks up sound in a bi-directional or figure 8 pattern. This directional pattern can be modified by enclosing one side of the ribbon in a baffle, or other acoustic trap. Ribbon microphones give very high quality sound reproduction, but the diaphragm is very fragile and the mics are very expensive, so they must be handled with care. They don’t require phantom power, and in fact, any voltage might damage the microphone. Ribbon microphones are very versatile, and can be used to record all instruments and vocals. 

2. Carbon Microphones

Carbon microphones were once commonly used in telephone handsets, but are not very widely used today. This type of microphone uses carbon dust pressed between two metal plates, with an electrical current passed between the plates through the carbon. Sound waves from your voice vibrate the front plate, compressing and decompressing the carbon powder and thereby changing its electrical resistance, which modulates the current. Since this type of microphone isn’t used very often in modern times, most audio engineers won’t need to know too much about how a carbon microphone works.

There are many different types of microphones with all sorts of different applications, both current and archaic. As an audio engineer, or even as someone who records music from their bedroom, knowledge is key. I will be wrapping up this series on microphones in my next article, so please read on.


Hi everyone,

Today marks the start of our Microphones 101 article series. We will be discussing many different mics and how they work. The first article in this series concerns Dynamic and Condenser mics. Please leave comments and let us know if you found this helpful.

Thanks,

Jason Cole

DiskFaktory – Webmaster

Microphones for Musicians – Dynamic and Condenser

If you are a recording engineer, you probably already know everything there is to know about microphones. But if you are a musician recording from home, you might not. When it comes to recording audio, microphones are the most important piece of equipment you’ll purchase. Most experts recommend that your main microphone cost at least 30% of the price of the recorder you’re using, and even then, the cheapest microphone you will want to use runs at least $100.

How do microphones work?

Microphones capture sound waves with a thin, flexible diaphragm. When you sing into the microphone, the sound of your voice vibrates this diaphragm. The vibrations of the diaphragm element are converted by various methods into an electrical signal that is an analog of the original sound.

There are many different types of microphones; we’ll be discussing dynamic and condenser microphones today.

1. Dynamic Microphones

In a dynamic microphone, a small movable induction coil is positioned in the magnetic field of a permanent magnet and attached to the diaphragm. When sound enters the microphone, the sound waves vibrate the diaphragm, the diaphragm vibrates the coil, and the coil’s movement in the magnetic field produces a varying current through electromagnetic induction. Dynamic microphones can be used for many different applications; they are relatively inexpensive and resistant to moisture, making them an excellent choice for live singers and recording vocalists.

2. Condenser Microphones

A condenser microphone (also known as a capacitor microphone) is essentially a capacitor, with one plate of the capacitor moving in response to sound waves. The movement changes the capacitance, and these changes are amplified to create a measurable signal. (A capacitor is a device that stores energy in the electric field created between a pair of conductors.) Condenser microphones usually require a power supply and can be expensive, so they might not work for everyone. However, they produce a high-quality signal, so they are the preferred choice in laboratory and studio recording applications.

There are many different types of microphones with all sorts of different applications. In the next few articles I write I will be discussing how each of them function, and what applications are best suited for each microphone. I hope that this article left you better educated on how dynamic and condenser microphones function.


If you have read my last article, “What does an audio engineer do when mastering music?”, you already know what is involved in the professional mastering process. To recap for those who haven’t read it: the mastering process adds polish to your songs and makes them sonically cohesive. A lot of albums are recorded and then thrown onto a disc, sans mastering. While this works, by no means do I recommend it. There are a few reasons why.

1. Mastering adds a professional, commercial sound to your songs or album.

All of your favorite albums and the bands you hear on the radio have had their audio mastered by a professional mastering engineer before being sent to the CD manufacturing facility. This ensures that you hear all of the recording’s low-end bass, mid-range, and highs crisply.

2. Audio mastering allows another set of ears to evaluate your audio.

Having another skilled audio technician listen to your recording is always a plus; they can bring a fresh perspective and new ideas to your album’s production. Your recording and mixing engineers have spent hours and hours listening to your music, so someone with a skilled ear who was not present can point out issues and help better the quality of your finished project.

Audio mastering is a vital step in the recording and CD manufacturing process. This article should help you understand why professional mastering is a step you should not leave out of your next recording project. All commercially released audio CDs utilize the CD mastering process, and you should do the same.
