Components of Sound
When an object vibrates it creates a disturbance that
travels through whatever medium it is adjacent to; for sound waves this
medium is usually air. Sounds are essentially pressure waves: if there were no
air, there would be no medium for sound to travel through. To describe how we
hear sound, we can take the example of a person clapping. When a person claps,
the air between their hands is pushed aside as the hands come together. The
quicker the hands come together, the greater the force and the faster the air
molecules are pushed away. These air molecules then travel in
all directions away from the clapping hands; when this pressure wave comes
into contact with our eardrum, we hear the sound.
In order for us as designers to use sound in our digital
productions, we need to understand the basic components we can
manipulate: intensity, pitch, tone and duration.
Sound waves have a number of properties. Intensity is the amount of
energy the sound carries over a certain area, with that energy measured as
amplitude. From this we can deduce that the more energy a sound wave has, the
higher its amplitude, and therefore the more intense it is. To use sound
intensity in digital media productions, we can raise or lower the sound's
level in decibels; in simple terms we are changing the loudness of the sound.
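The relationship between amplitude and decibels can be sketched in a few lines of Python. This is a general illustration of the standard amplitude-to-decibel formula (20 times the base-10 logarithm of the amplitude ratio), not code from any particular audio tool; the function names are my own.

```python
import math

def db_change(a_ref, a_new):
    # Decibel difference between two amplitudes: 20 * log10(a_new / a_ref).
    return 20 * math.log10(a_new / a_ref)

def apply_gain(amplitude, db):
    # Scale an amplitude by a gain expressed in decibels.
    return amplitude * 10 ** (db / 20)
```

For example, doubling the amplitude corresponds to an increase of roughly 6 dB, which is why volume faders in audio software are marked in decibels rather than raw amplitude.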
Pitch is what we use to distinguish sounds of the same note
but in different octaves. In scientific terms, pitch depends upon the
frequency of a sound wave: the more wavelengths there are within a certain unit
of time, the higher the frequency. Frequency is measured in
hertz, and for us to hear a sound its frequency must fall between roughly 20 and
20,000 hertz.
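A minimal sketch of these two ideas, assuming a common 44,100 Hz sample rate: a check against the rough limits of human hearing, and a generator for a pure sine wave at a given pitch. The names and structure are illustrative only.

```python
import math

SAMPLE_RATE = 44_100  # samples per second, a common audio rate

def audible(freq_hz):
    # Human hearing spans roughly 20 Hz to 20,000 Hz.
    return 20 <= freq_hz <= 20_000

def sine_wave(freq_hz, duration_s, amplitude=1.0):
    # Generate the samples of a pure sine wave at the given frequency (pitch).
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]
```

A higher `freq_hz` packs more wave cycles into the same number of samples, which is exactly what we perceive as a higher pitch.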
Tone is all about the vibration and the quality of the sound itself: it describes the difference between two sounds played at the same pitch and volume. For example, the tone created by one beginner violinist is very different from that of a whole symphony of violinists. In more detail, when a sound source vibrates it vibrates at multiple frequencies, and each of these produces a wave. Sound quality depends on the combination of these different frequencies of sound waves.
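The idea that tone comes from a combination of frequencies can be sketched by summing a fundamental frequency with weighted harmonics (integer multiples of the fundamental). This is a generic illustration of additive synthesis; the function name and amplitude values are assumptions for the example.

```python
import math

SAMPLE_RATE = 44_100  # samples per second

def harmonic_tone(fundamental_hz, harmonic_amps, duration_s):
    # Sum a fundamental and its integer-multiple harmonics.
    # harmonic_amps[k] is the amplitude of the (k+1)-th harmonic; it is
    # this mix of amplitudes that gives two instruments playing the same
    # note their different tone (timbre).
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for t in range(n):
        s = sum(a * math.sin(2 * math.pi * fundamental_hz * (k + 1)
                             * t / SAMPLE_RATE)
                for k, a in enumerate(harmonic_amps))
        samples.append(s)
    return samples
```

Calling `harmonic_tone(220, [1.0], 0.5)` gives a pure, flute-like wave, while `harmonic_tone(220, [1.0, 0.5, 0.25], 0.5)` plays the same pitch with a richer mix of overtones, much as a violin differs from a tuning fork on the same note.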
The last component to consider when manipulating sound is its
duration, which we can alter through playback speed. For example, by changing
the playback speed of a sound we can make an explosion sound like a gunshot, or
a voice sound like a cartoon chipmunk character. Playback can also be
manipulated to make sounds more sudden or more subtle; slowing audio down can
reveal elements that we may not have noticed. Take the example of the water in this video http://youtu.be/j_OyHUqIIOU?t=2m34s
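Speeding up or slowing down a recording can be sketched as resampling: reading the same samples at a different rate. The version below uses crude nearest-neighbour index scaling as an illustration; real audio tools interpolate between samples for smoother results.

```python
def change_speed(samples, factor):
    # factor 2.0 plays twice as fast (and an octave higher in pitch);
    # factor 0.5 plays at half speed (an octave lower).
    # Nearest-neighbour resampling -- a rough sketch only.
    n = int(len(samples) / factor)
    return [samples[int(i * factor)] for i in range(n)]
```

Doubling the speed halves the number of samples, which is why the pitch rises along with the tempo (the chipmunk effect); pitch-preserving time stretching needs more sophisticated techniques.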
One of the most noted debates in sound design is the
use of analogue versus digital, and vice versa. In reality there is a huge
difference between analogue and digital sound: for most of the 20th
century analogue was mainstream, but in the 21st
century digital sound has been much preferred by both producers and consumers.
A sound wave that is recorded or used in its original form is known as analogue. For example, the signal taken from a microphone inside a tape recorder is physically laid onto tape. Other examples of analogue sound being recorded can be found in the form of grooves (vinyl) or magnetic impulses. The alternative is digital: digital sound is produced by converting the physical properties of the original sound into a sequence of numbers, which can then be stored and read back for reproduction. Recording digitally means taking an analogue wave, sampling it and converting it into numbers so that it can be stored easily on a digital device (e.g. CD, iPod, computer, etc.).
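The sample-and-convert step can be sketched as follows: take a continuous signal (modelled here as a function of time), measure it at a fixed rate, and round each measurement to a signed integer. The 44,100 Hz rate and 16-bit depth match CD audio; the function name is my own.

```python
import math

def record_digitally(analogue, duration_s, sample_rate=44_100, bit_depth=16):
    # Sample a continuous signal (a function of time in seconds) and
    # quantise each sample to a signed integer -- the sequence of
    # numbers a CD or computer actually stores.
    max_int = 2 ** (bit_depth - 1) - 1  # 32767 for 16-bit audio
    n = int(sample_rate * duration_s)
    return [round(max(-1.0, min(1.0, analogue(t / sample_rate))) * max_int)
            for t in range(n)]
```

For instance, `record_digitally(lambda t: math.sin(2 * math.pi * 440 * t), 1.0)` turns one second of a 440 Hz tone into 44,100 integers, ready to be written to a file or burned to a CD.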
The main advantages of using analogue sound are that it is
considered a more accurate representation of sound and that, because it has been
physically recorded, it remains in its natural, original form. With analogue you are
essentially getting what you are hearing, with no deficiencies or other
manipulations; it has been physically replicated. Quality can be lost
with analogue sound if it has been over-played to the point where the
recording degrades permanently. Due to the limitations of editing analogue sound,
we cannot easily retrieve it if the device it was recorded on is
damaged. Additionally, the physical element of analogue sound means we need to
rewind it in order to listen to a specific part of a sound, instead of simply
skipping to a certain point. Lastly, because analogue recording captures everything
in its input vicinity, we can also get unwanted audio such as noise, or clipping
at certain frequencies.
Interactive Media Products
There are many different directions in which we can take
audio in interactive media projects: from sound-based games to button-click
events, sound can be both informative and emotionally expressive.
A good example of sound being used to emphasize atmosphere is in computer games. Even though computer games are mainly graphical, sound and music can be used to add atmosphere to the game whilst also reinforcing the graphics.
Within the past 10 years or so, however, there
has been an increase in the use of sound over graphics in interactive
products such as video games. This prioritizing of sound over graphics leads
users to use their imagination and associations, giving what they see on
screen far more texture and depth than we might first think. For
example, a 2D square dropped from a height can seem to have much more weight and
resistance if a smashing or thumping sound is applied to it as well. By
using sound as the base of a video game, the entertainment value can
skyrocket, as the developers let each user build a unique picture of what
they perceive. Take the example of Bit.Trip.Runner: in this game the user is
rewarded not just with points at the end of the level but with the music that
they build during it. By collecting all the red crosses and gold pieces, the
user adds certain tones and flavours to what they are hearing. An
example of Bit.Trip.Runner can be seen in the video below.
Even though sound is an amazing tool to use in video games, it does not come
without its drawbacks. In order to add sound to an interactive project, the
designer needs to deal with speed problems such as low-bandwidth internet
connections, or hardware limitations in processing speed, memory and specific
storage media. It is these types of restrictions that have made designers
hesitant to use sound and music to a great extent, because of the possibility of
it not working for a majority of users. This is why in today's media sound is
mainly used as a support to the graphics rather than as a fundamental component.
As well as music and tones, speech is also a fundamental
tool for storytelling, through voice-overs and narration. Because narration
carries no physical emotion, such characters are very voice-driven and
usually distinctive in the way they speak and the vocabulary they use.
Take the example of a recent popular indie game called "Bastion": the player
hears a voice that guides them through the story and explains details which
are more entertaining and efficient when delivered through audio. Watch the
video below from 1 minute onwards: there we have a very distinctive-sounding
narrator who explains what the character is doing, where he is going
and what is happening around him. This is an example of a disembodied voice;
the tonal characteristics and emotions are far more highlighted than
if the character himself had a physical representation, and it leads the player
to his or her imagination.
Outside the world of video games, sound is also very useful in applications on mobile devices or consoles. For example, an interface can communicate with the user not just through the animation played when a button is clicked, but also through the sound played when it is selected. This use of key tones can create a positive sense of progression and connection with the user, as the sound signifies that the person is moving forward or beyond where they were previously browsing. In more detail, it is quite common practice for developers to use high tones when a user enters a menu or area, and low tones when they exit one. This technique personifies the action and creates an emotion that can provoke a feeling of 'discovery' or 'excitement', therefore creating a richer experience.
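The high-tone/low-tone convention could be sketched as a simple mapping from navigation events to feedback frequencies. Everything here is hypothetical: the event names and frequency values are illustrative choices, not taken from any particular UI toolkit.

```python
# Hypothetical event-to-tone mapping: event names and frequencies are
# illustrative, not drawn from any real framework.
UI_TONES_HZ = {
    "enter_menu": 880,  # higher pitch suggests moving forward/deeper
    "exit_menu": 440,   # lower pitch suggests stepping back out
    "error": 220,       # low tone for a negative event
}

def tone_for(event):
    # Return the feedback frequency for a navigation event (0 = silence).
    return UI_TONES_HZ.get(event, 0)
```

The design choice is that the pitch direction mirrors the user's direction of travel through the interface, reinforcing the feeling of progressing inward or stepping back out.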
In summary, we can categorize the various sounds used in
interactive media products using diegesis: diegetic sounds are part of an
action or of what we can see (the visual effects in Bit.Trip.Runner), while
non-diegetic sounds are not present within the scene
(the voice-over in Bastion). A mixture of both can be referred to as a soundscape,
a compilation of sounds (layered upon one another) that tries to capture emotion or immersion; for
example, the introductory sound when somebody boots up a Windows computer or a
mobile device.
Functions of Sound
Beyond interactive media, sound can be used for a whole range of purposes. Mainly, sound is used to enhance an experience: when watching horror movies, gloomy music and tones help build up the atmosphere and sustain it. In contrast to a gloomy score, cheery and upbeat music can invoke positive feelings such as happiness or joy. This use of sound is commonly known as background music: music that serves to create an atmosphere or portray what a character should be feeling. As well as invoking emotion, background music can be used to express a passage of time, known as a montage. This Vimeo link shows a scene from "Breaking Bad" in which two methamphetamine cooks are producing a lot of product and making a lot of money over time. Around 3:15 we are shown a few shots of our protagonist (or anti-hero), Walter White, displaying signs of boredom or frustration.
Another purpose of sound is effects. Previously, in Sound
for Interactive Media Products, I spoke of transitions and events for mobile
devices and applications. Sound effects can be used for more than just
interactive products: they can mark events in video games (a gunshot),
explosions in movies and moments in other types of media. Without sound effects,
developers would be pulling the player away from the sense of realism that
they are trying to invoke. Additionally, sound effects can be used to notify
a person that an event is occurring; for example, websites such as Facebook
catch the attention of the user when somebody messages them. When you are
messaged on Facebook, a sharp, loud but quick noise is played so that the person
at the computer can definitely register that they have been contacted. These
types of noises are known as prompt noises; they are mainly used in
interactive media to communicate with the user and register events.
Sound can also be used to emphasize a person's sensory
experience, alongside lighting, smell, comfort, etc.; this is known as
ambience. Ambience is effectively the noise of the environment; generally it is
used to set a realistic scene by grounding the user in a certain location and
situation. For example, ambience tends to consist of natural-sounding effects such as
the cacophony of crickets and leaves at night, or the crashing of
waves during a stormy English afternoon on the beach. Like background music,
ambience is a more subtle way of setting and sustaining the scene.
Within interactive media there is also voice recognition:
somebody with the right application can use their voice to
communicate with and control a certain device. For example, iPhones include
a program called Siri which recognizes certain voice patterns and tones, and
is able to respond and activate certain parts of the mobile device
accordingly. This type of sound is generally used to inform the user of events
going on around them rather than to set an emotional scene.
Annotate sounds in programs such as Logic or Audacity.
Collect a variety of different sound assets for a library.
Apply effects that demonstrate creativity and flair (layered sounds, effects).
Final marks on the sound will come from applying it to objects within Unity.